Test Report: Docker_Linux_crio 12230

098adff14f97e55ded5626b0a90c858c09622337:2021-08-13:19986

Tests failed (14/264)

TestAddons/parallel/Ingress (315.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-6zkjg" [e56baacd-a202-4c5f-96eb-7b26dba4345c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 4.387205ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813200856-13784 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210813200856-13784 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [b6e94126-81c4-4da5-a9a9-cbc4c96676d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [b6e94126-81c4-4da5-a9a9-cbc4c96676d0] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 24.005212887s
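For reference, a roughly equivalent manual readiness check against the same label selector (a sketch; it assumes the cluster and kubectl context from this run are still available):

	kubectl --context addons-20210813200856-13784 wait pod --namespace default --selector=run=nginx --for=condition=Ready --timeout=4m0s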
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210813200856-13784 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.788912277s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:224: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
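The "ssh: Process exited with status 28" in stderr means the remote curl exited 28, curl's operation-timeout code (CURLE_OPERATION_TIMEDOUT): the request was issued but the ingress endpoint never answered in time. A minimal sketch for re-running the probe by hand with a verbose, explicitly bounded curl (assuming the profile from this run still exists):

	out/minikube-linux-amd64 -p addons-20210813200856-13784 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"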
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813200856-13784 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210813200856-13784 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.800760069s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:262: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable ingress --alsologtostderr -v=1: (28.554299738s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210813200856-13784
helpers_test.go:236: (dbg) docker inspect addons-20210813200856-13784:

-- stdout --
	[
	    {
	        "Id": "d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d",
	        "Created": "2021-08-13T20:09:01.765537405Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 15379,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:09:02.361691605Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/hosts",
	        "LogPath": "/var/lib/docker/containers/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d-json.log",
	        "Name": "/addons-20210813200856-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210813200856-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210813200856-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/91df1cac9cf563fe5102c73f4ca7ce01d1599fc6b42cc11a619f74d1105050a4-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/91df1cac9cf563fe5102c73f4ca7ce01d1599fc6b42cc11a619f74d1105050a4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/91df1cac9cf563fe5102c73f4ca7ce01d1599fc6b42cc11a619f74d1105050a4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/91df1cac9cf563fe5102c73f4ca7ce01d1599fc6b42cc11a619f74d1105050a4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-20210813200856-13784",
	                "Source": "/var/lib/docker/volumes/addons-20210813200856-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210813200856-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210813200856-13784",
	                "name.minikube.sigs.k8s.io": "addons-20210813200856-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8578d5da80fe229a9874a269b107fe3b24f1075604de2f77d43eb92374c14955",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8578d5da80fe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210813200856-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d6667d21286d"
	                    ],
	                    "NetworkID": "32a0f66c3ea003d4a952d815a142a02685c101eab3ba4b825b6dded73b6a44bf",
	                    "EndpointID": "558fc62cf2e36f27205dea38fd7df0cf1e0f476e00a8f03533ed21ba04df6a06",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
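When only the published ports matter, the inspect output above can be narrowed with a Go-template filter instead of the full dump; a sketch:

	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-20210813200856-13784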
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210813200856-13784 -n addons-20210813200856-13784
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 logs -n 25
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|--------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                 |               Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------|--------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                | download-only-20210813200750-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:25 UTC | Fri, 13 Aug 2021 20:08:25 UTC |
	| delete  | -p                                   | download-only-20210813200750-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:25 UTC | Fri, 13 Aug 2021 20:08:25 UTC |
	|         | download-only-20210813200750-13784   |                                      |         |         |                               |                               |
	| delete  | -p                                   | download-only-20210813200750-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:25 UTC | Fri, 13 Aug 2021 20:08:26 UTC |
	|         | download-only-20210813200750-13784   |                                      |         |         |                               |                               |
	| delete  | -p                                   | download-docker-20210813200826-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:56 UTC | Fri, 13 Aug 2021 20:08:56 UTC |
	|         | download-docker-20210813200826-13784 |                                      |         |         |                               |                               |
	| start   | -p addons-20210813200856-13784       | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:56 UTC | Fri, 13 Aug 2021 20:11:46 UTC |
	|         | --wait=true --memory=4000            |                                      |         |         |                               |                               |
	|         | --alsologtostderr                    |                                      |         |         |                               |                               |
	|         | --addons=registry                    |                                      |         |         |                               |                               |
	|         | --addons=metrics-server              |                                      |         |         |                               |                               |
	|         | --addons=olm                         |                                      |         |         |                               |                               |
	|         | --addons=volumesnapshots             |                                      |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver         |                                      |         |         |                               |                               |
	|         | --driver=docker                      |                                      |         |         |                               |                               |
	|         | --container-runtime=crio             |                                      |         |         |                               |                               |
	|         | --addons=ingress                     |                                      |         |         |                               |                               |
	|         | --addons=helm-tiller                 |                                      |         |         |                               |                               |
	| -p      | addons-20210813200856-13784          | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:00 UTC | Fri, 13 Aug 2021 20:12:16 UTC |
	|         | addons enable gcp-auth --force       |                                      |         |         |                               |                               |
	| -p      | addons-20210813200856-13784          | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:21 UTC | Fri, 13 Aug 2021 20:12:21 UTC |
	|         | addons disable metrics-server        |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20210813200856-13784 ip       | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:39 UTC | Fri, 13 Aug 2021 20:12:39 UTC |
	| -p      | addons-20210813200856-13784          | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:39 UTC | Fri, 13 Aug 2021 20:12:39 UTC |
	|         | addons disable registry              |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20210813200856-13784          | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:13:28 UTC | Fri, 13 Aug 2021 20:13:29 UTC |
	|         | addons disable helm-tiller           |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20210813200856-13784          | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:13:27 UTC | Fri, 13 Aug 2021 20:13:33 UTC |
	|         | addons disable gcp-auth              |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20210813200856-13784          | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:13:32 UTC | Fri, 13 Aug 2021 20:13:39 UTC |
	|         | addons disable                       |                                      |         |         |                               |                               |
	|         | csi-hostpath-driver                  |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20210813200856-13784          | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:13:39 UTC | Fri, 13 Aug 2021 20:13:39 UTC |
	|         | addons disable volumesnapshots       |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	| -p      | addons-20210813200856-13784          | addons-20210813200856-13784          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:17:25 UTC | Fri, 13 Aug 2021 20:17:54 UTC |
	|         | addons disable ingress               |                                      |         |         |                               |                               |
	|         | --alsologtostderr -v=1               |                                      |         |         |                               |                               |
	|---------|--------------------------------------|--------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:56
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:56.311642   14717 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:56.311812   14717 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:56.311820   14717 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:56.311823   14717 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:56.311904   14717 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:08:56.312180   14717 out.go:305] Setting JSON to false
	I0813 20:08:56.345916   14717 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":3099,"bootTime":1628882237,"procs":134,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:56.346043   14717 start.go:121] virtualization: kvm guest
	I0813 20:08:56.348652   14717 out.go:177] * [addons-20210813200856-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:08:56.350108   14717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:08:56.348797   14717 notify.go:169] Checking for updates...
	I0813 20:08:56.351579   14717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:08:56.352981   14717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:08:56.354424   14717 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:08:56.354581   14717 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:08:56.396102   14717 docker.go:132] docker version: linux-19.03.15
	I0813 20:08:56.396174   14717 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:56.471883   14717 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:56.42909066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:56.471986   14717 docker.go:244] overlay module found
	I0813 20:08:56.474108   14717 out.go:177] * Using the docker driver based on user configuration
	I0813 20:08:56.474132   14717 start.go:278] selected driver: docker
	I0813 20:08:56.474138   14717 start.go:751] validating driver "docker" against <nil>
	I0813 20:08:56.474156   14717 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:08:56.474258   14717 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:08:56.474278   14717 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:08:56.475614   14717 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:08:56.476471   14717 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:56.550437   14717 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:56.509157527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:56.550534   14717 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:08:56.550714   14717 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:08:56.550737   14717 cni.go:93] Creating CNI manager for ""
	I0813 20:08:56.550743   14717 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:08:56.550750   14717 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:08:56.550760   14717 start_flags.go:277] config:
	{Name:addons-20210813200856-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200856-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:56.552838   14717 out.go:177] * Starting control plane node addons-20210813200856-13784 in cluster addons-20210813200856-13784
	I0813 20:08:56.552880   14717 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:08:56.554470   14717 out.go:177] * Pulling base image ...
	I0813 20:08:56.554493   14717 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:08:56.554521   14717 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:08:56.554523   14717 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:08:56.554538   14717 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:56.554735   14717 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:08:56.554755   14717 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:08:56.555036   14717 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/config.json ...
	I0813 20:08:56.555060   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/config.json: {Name:mk85ca5d9875bbbfa4265b90f9695e8ceab6cb90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:56.635806   14717 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:08:56.635831   14717 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:08:56.635846   14717 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:08:56.635886   14717 start.go:313] acquiring machines lock for addons-20210813200856-13784: {Name:mkbb24f1205077781013655951cd3cacdcf605ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:56.636023   14717 start.go:317] acquired machines lock for "addons-20210813200856-13784" in 112.353µs
	I0813 20:08:56.636049   14717 start.go:89] Provisioning new machine with config: &{Name:addons-20210813200856-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200856-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:08:56.636133   14717 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:08:56.638357   14717 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0813 20:08:56.638548   14717 start.go:160] libmachine.API.Create for "addons-20210813200856-13784" (driver="docker")
	I0813 20:08:56.638582   14717 client.go:168] LocalClient.Create starting
	I0813 20:08:56.638681   14717 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:08:56.707303   14717 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:08:56.833288   14717 cli_runner.go:115] Run: docker network inspect addons-20210813200856-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:08:56.869399   14717 cli_runner.go:162] docker network inspect addons-20210813200856-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:08:56.869501   14717 network_create.go:255] running [docker network inspect addons-20210813200856-13784] to gather additional debugging logs...
	I0813 20:08:56.869526   14717 cli_runner.go:115] Run: docker network inspect addons-20210813200856-13784
	W0813 20:08:56.903139   14717 cli_runner.go:162] docker network inspect addons-20210813200856-13784 returned with exit code 1
	I0813 20:08:56.903171   14717 network_create.go:258] error running [docker network inspect addons-20210813200856-13784]: docker network inspect addons-20210813200856-13784: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210813200856-13784
	I0813 20:08:56.903190   14717 network_create.go:260] output of [docker network inspect addons-20210813200856-13784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210813200856-13784
	
	** /stderr **
	I0813 20:08:56.903249   14717 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:08:56.936473   14717 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000140310] misses:0}
	I0813 20:08:56.936515   14717 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:08:56.936534   14717 network_create.go:106] attempt to create docker network addons-20210813200856-13784 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 20:08:56.936576   14717 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210813200856-13784
	I0813 20:08:57.011262   14717 network_create.go:90] docker network addons-20210813200856-13784 192.168.49.0/24 created
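	The created network can be checked against the reserved subnet with a format filter; a sketch:
	docker network inspect addons-20210813200856-13784 -f '{{range .IPAM.Config}}{{.Subnet}} gateway {{.Gateway}}{{end}}'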
	I0813 20:08:57.011294   14717 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210813200856-13784" container
	I0813 20:08:57.011353   14717 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:08:57.045427   14717 cli_runner.go:115] Run: docker volume create addons-20210813200856-13784 --label name.minikube.sigs.k8s.io=addons-20210813200856-13784 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:08:57.081215   14717 oci.go:102] Successfully created a docker volume addons-20210813200856-13784
	I0813 20:08:57.081292   14717 cli_runner.go:115] Run: docker run --rm --name addons-20210813200856-13784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813200856-13784 --entrypoint /usr/bin/test -v addons-20210813200856-13784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:09:01.652609   14717 cli_runner.go:168] Completed: docker run --rm --name addons-20210813200856-13784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813200856-13784 --entrypoint /usr/bin/test -v addons-20210813200856-13784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (4.571280881s)
	I0813 20:09:01.652642   14717 oci.go:106] Successfully prepared a docker volume addons-20210813200856-13784
	W0813 20:09:01.652676   14717 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:09:01.652686   14717 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:09:01.652696   14717 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:09:01.652725   14717 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:09:01.652728   14717 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:09:01.652781   14717 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210813200856-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:09:01.728119   14717 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210813200856-13784 --name addons-20210813200856-13784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813200856-13784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210813200856-13784 --network addons-20210813200856-13784 --ip 192.168.49.2 --volume addons-20210813200856-13784:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:09:02.369692   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Running}}
	I0813 20:09:02.411115   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:02.451212   14717 cli_runner.go:115] Run: docker exec addons-20210813200856-13784 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:09:02.583518   14717 oci.go:278] the created container "addons-20210813200856-13784" has a running status.
	I0813 20:09:02.583556   14717 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa...
	I0813 20:09:02.981039   14717 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:09:03.415532   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:03.457293   14717 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:09:03.457317   14717 kic_runner.go:115] Args: [docker exec --privileged addons-20210813200856-13784 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:09:05.384998   14717 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210813200856-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.732166014s)
	I0813 20:09:05.385027   14717 kic.go:188] duration metric: took 3.732299 seconds to extract preloaded images to volume
	I0813 20:09:05.385114   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:05.420681   14717 machine.go:88] provisioning docker machine ...
	I0813 20:09:05.420721   14717 ubuntu.go:169] provisioning hostname "addons-20210813200856-13784"
	I0813 20:09:05.420806   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:05.455921   14717 main.go:130] libmachine: Using SSH client type: native
	I0813 20:09:05.456137   14717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0813 20:09:05.456158   14717 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210813200856-13784 && echo "addons-20210813200856-13784" | sudo tee /etc/hostname
	I0813 20:09:05.664433   14717 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210813200856-13784
	
	I0813 20:09:05.664519   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:05.700180   14717 main.go:130] libmachine: Using SSH client type: native
	I0813 20:09:05.700340   14717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0813 20:09:05.700362   14717 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210813200856-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210813200856-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210813200856-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:09:05.820768   14717 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:09:05.820802   14717 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:09:05.820825   14717 ubuntu.go:177] setting up certificates
	I0813 20:09:05.820837   14717 provision.go:83] configureAuth start
	I0813 20:09:05.820894   14717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813200856-13784
	I0813 20:09:05.855796   14717 provision.go:138] copyHostCerts
	I0813 20:09:05.855892   14717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:09:05.856007   14717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:09:05.856076   14717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:09:05.856129   14717 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.addons-20210813200856-13784 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210813200856-13784]
	I0813 20:09:06.168537   14717 provision.go:172] copyRemoteCerts
	I0813 20:09:06.168602   14717 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:09:06.168647   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:06.203958   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:06.292297   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:09:06.310552   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:09:06.325447   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:09:06.340226   14717 provision.go:86] duration metric: configureAuth took 519.377981ms
	I0813 20:09:06.340247   14717 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:09:06.340394   14717 config.go:177] Loaded profile config "addons-20210813200856-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:09:06.340510   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:06.375512   14717 main.go:130] libmachine: Using SSH client type: native
	I0813 20:09:06.375670   14717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0813 20:09:06.375687   14717 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:09:06.966222   14717 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:09:06.966251   14717 machine.go:91] provisioned docker machine in 1.545547927s
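The CRIO_MINIKUBE_OPTIONS written just above tell CRI-O to treat the cluster service CIDR (10.96.0.0/12) as an insecure registry, which lets the node pull from in-cluster registries such as the registry addon enabled later in this run. A sketch for reading the file back from the host, assuming the profile name used here:

    # Read back the runtime options the provisioner wrote (illustrative).
    minikube -p addons-20210813200856-13784 ssh -- cat /etc/sysconfig/crio.minikube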
	I0813 20:09:06.966263   14717 client.go:171] LocalClient.Create took 10.327671302s
	I0813 20:09:06.966285   14717 start.go:168] duration metric: libmachine.API.Create for "addons-20210813200856-13784" took 10.327735551s
	I0813 20:09:06.966296   14717 start.go:267] post-start starting for "addons-20210813200856-13784" (driver="docker")
	I0813 20:09:06.966303   14717 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:09:06.966361   14717 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:09:06.966413   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:07.004867   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:07.092980   14717 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:09:07.095583   14717 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:09:07.095607   14717 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:09:07.095622   14717 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:09:07.095633   14717 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:09:07.095645   14717 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:09:07.095694   14717 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:09:07.095717   14717 start.go:270] post-start completed in 129.414734ms
	I0813 20:09:07.095978   14717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813200856-13784
	I0813 20:09:07.132031   14717 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/config.json ...
	I0813 20:09:07.132302   14717 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:09:07.132354   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:07.166930   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:07.255425   14717 start.go:129] duration metric: createHost completed in 10.619275301s
	I0813 20:09:07.255458   14717 start.go:80] releasing machines lock for "addons-20210813200856-13784", held for 10.619420796s
	I0813 20:09:07.255540   14717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813200856-13784
	I0813 20:09:07.290867   14717 ssh_runner.go:149] Run: systemctl --version
	I0813 20:09:07.290921   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:07.290926   14717 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:09:07.290986   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:07.337749   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:07.338584   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:07.679216   14717 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:09:07.765547   14717 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:09:07.773840   14717 docker.go:153] disabling docker service ...
	I0813 20:09:07.773893   14717 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:09:07.782830   14717 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:09:07.791332   14717 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:09:07.855481   14717 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:09:07.919150   14717 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:09:07.927580   14717 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:09:07.938796   14717 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:09:07.945988   14717 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:09:07.946011   14717 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
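The two sed edits above pin CRI-O's pause image and its default CNI network in /etc/crio/crio.conf; because each program matches the whole line, re-running them is idempotent. A hedged one-liner to verify both edits landed (the key names come straight from the sed programs):

    # Both rewritten lines should appear in crio.conf:
    sudo grep -E '(^pause_image|cni_default_network) =' /etc/crio/crio.conf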
	I0813 20:09:07.953059   14717 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:09:07.959697   14717 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:09:07.959742   14717 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:09:07.965900   14717 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
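The failed sysctl above is expected: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which the very next command does; IPv4 forwarding is then switched on directly through /proc. The same preparation, condensed into a standalone sketch (assumes root on the node):

    # Load bridge-netfilter; the sysctl it provides then becomes readable.
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables
    # Enable IPv4 forwarding for this boot only (not persisted).
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'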
	I0813 20:09:07.971320   14717 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:09:08.024674   14717 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:09:08.033207   14717 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:09:08.033294   14717 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:09:08.036129   14717 start.go:413] Will wait 60s for crictl version
	I0813 20:09:08.036170   14717 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:09:08.162770   14717 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:09:08.162868   14717 ssh_runner.go:149] Run: crio --version
	I0813 20:09:08.227495   14717 ssh_runner.go:149] Run: crio --version
	I0813 20:09:08.288896   14717 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:09:08.288961   14717 cli_runner.go:115] Run: docker network inspect addons-20210813200856-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:09:08.324770   14717 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:09:08.327923   14717 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:09:08.336387   14717 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:09:08.336442   14717 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:09:08.381367   14717 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:09:08.381387   14717 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:09:08.381425   14717 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:09:08.410706   14717 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:09:08.410727   14717 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:09:08.410787   14717 ssh_runner.go:149] Run: crio config
	I0813 20:09:08.476457   14717 cni.go:93] Creating CNI manager for ""
	I0813 20:09:08.476482   14717 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:09:08.476492   14717 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:09:08.476503   14717 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210813200856-13784 NodeName:addons-20210813200856-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:09:08.476611   14717 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210813200856-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
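The rendered kubeadm config above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into the single file that is staged as /var/tmp/minikube/kubeadm.yaml.new further down and promoted to kubeadm.yaml just before init. One hedged way to exercise such a file without touching the node, using the kubeadm binary minikube stages:

    # Dry-run the rendered config; nothing is applied (illustrative invocation).
    sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run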
	
	I0813 20:09:08.476695   14717 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-20210813200856-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200856-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
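The kubelet unit fragment above is written as a systemd drop-in (the 558-byte 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line is deliberate, clearing the packaged unit's command before redefining it. To see the merged result on the node (assuming systemd):

    # Show the kubelet unit plus every drop-in, including 10-kubeadm.conf.
    sudo systemctl cat kubelet.service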
	I0813 20:09:08.476740   14717 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:09:08.484656   14717 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:09:08.484711   14717 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:09:08.490917   14717 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (558 bytes)
	I0813 20:09:08.502004   14717 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:09:08.513390   14717 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0813 20:09:08.524457   14717 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:09:08.526995   14717 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:09:08.535060   14717 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784 for IP: 192.168.49.2
	I0813 20:09:08.535093   14717 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:09:08.697251   14717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt ...
	I0813 20:09:08.697283   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt: {Name:mka042cb0663c4066024ffb89281c455bf0f1daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:08.697498   14717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key ...
	I0813 20:09:08.697512   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key: {Name:mkb3b1653c722e8714b1f3c0723bfdfd08979327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:08.697599   14717 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:09:09.044724   14717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt ...
	I0813 20:09:09.044758   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt: {Name:mk115ab006902a56465fd5166ddff8f455416f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:09.044967   14717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key ...
	I0813 20:09:09.044983   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key: {Name:mk93af14c476676efd7a0bf54f659bcae33707eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:09.045095   14717 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.key
	I0813 20:09:09.045107   14717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt with IP's: []
	I0813 20:09:09.200406   14717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt ...
	I0813 20:09:09.200436   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: {Name:mk0bf9c1af1c035e5aba48abb41fc33910b45eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:09.200619   14717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.key ...
	I0813 20:09:09.200632   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.key: {Name:mk1857b0ce2ab2d194ffb84312fe71844b23ef18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:09.200721   14717 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.key.dd3b5fb2
	I0813 20:09:09.200731   14717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:09:09.464003   14717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.crt.dd3b5fb2 ...
	I0813 20:09:09.464039   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.crt.dd3b5fb2: {Name:mk711e920b04bee5f490113c0cf4b13507156638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:09.464230   14717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.key.dd3b5fb2 ...
	I0813 20:09:09.464244   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.key.dd3b5fb2: {Name:mkea46bc11dc01c78ebab74a807ab4566d8f4ea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:09.464334   14717 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.crt
	I0813 20:09:09.464392   14717 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.key
	I0813 20:09:09.464445   14717 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/proxy-client.key
	I0813 20:09:09.464465   14717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/proxy-client.crt with IP's: []
	I0813 20:09:09.571907   14717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/proxy-client.crt ...
	I0813 20:09:09.571936   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/proxy-client.crt: {Name:mke6b4184d4dafe8e4d564dc6b1aab1555fe58af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:09.572110   14717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/proxy-client.key ...
	I0813 20:09:09.572123   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/proxy-client.key: {Name:mk011a03f279cee645ac57707f674094f1395a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:09.572335   14717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:09:09.572371   14717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:09:09.572397   14717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:09:09.572425   14717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:09:09.573416   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:09:09.658504   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:09:09.674903   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:09:09.690097   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:09:09.705173   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:09:09.719995   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:09:09.734901   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:09:09.749962   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:09:09.765079   14717 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:09:09.780680   14717 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:09:09.791483   14717 ssh_runner.go:149] Run: openssl version
	I0813 20:09:09.800278   14717 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:09:09.808536   14717 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:09:09.811222   14717 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:09:09.811281   14717 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:09:09.815577   14717 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
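The symlink name created above is not arbitrary: OpenSSL looks certificates up by subject hash, so /etc/ssl/certs/b5213941.0 is the hash of minikubeCA.pem plus a ".0" suffix, which is exactly what the preceding `openssl x509 -hash -noout` call computed. Reproducing it by hand:

    # Prints the subject hash; with ".0" appended it names the trust symlink.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem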
	I0813 20:09:09.822409   14717 kubeadm.go:390] StartCluster: {Name:addons-20210813200856-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200856-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:09:09.822484   14717 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:09:09.822543   14717 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:09:09.846641   14717 cri.go:76] found id: ""
	I0813 20:09:09.846703   14717 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:09:09.852993   14717 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:09:09.859041   14717 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:09:09.859095   14717 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:09:09.865255   14717 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:09:09.865290   14717 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
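Because the "node" here is itself a Docker container, several kubeadm preflight checks (Swap, Mem, SystemVerification, the bridge-nf-call-iptables file test) would fail spuriously, hence the long --ignore-preflight-errors list above. To see which checks would trip on such a node, the preflight phase can be run on its own (sketch):

    # Run only kubeadm's preflight checks against the same config.
    sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml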
	I0813 20:09:10.146545   14717 out.go:204]   - Generating certificates and keys ...
	I0813 20:09:12.238623   14717 out.go:204]   - Booting up control plane ...
	I0813 20:09:27.285805   14717 out.go:204]   - Configuring RBAC rules ...
	I0813 20:09:27.697270   14717 cni.go:93] Creating CNI manager for ""
	I0813 20:09:27.697306   14717 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:09:27.699118   14717 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:09:27.699201   14717 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:09:27.702729   14717 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:09:27.702747   14717 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:09:27.714584   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:09:28.032117   14717 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:09:28.032217   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:28.032217   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=addons-20210813200856-13784 minikube.k8s.io/updated_at=2021_08_13T20_09_28_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:28.101918   14717 ops.go:34] apiserver oom_adj: -16
	I0813 20:09:28.101909   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:28.679122   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:29.178604   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:29.678922   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:30.179149   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:30.678504   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:31.178555   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:31.679471   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:32.179224   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:32.678559   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:33.179386   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:33.678573   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:34.178852   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:34.679553   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:35.178638   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:35.678651   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:36.179286   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:36.679564   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:37.179281   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:37.678754   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:38.679461   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:40.325827   14717 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.646310658s)
	I0813 20:09:40.679242   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:41.679383   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:42.179079   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:42.679552   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:43.178727   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:43.678551   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:44.178883   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:44.679128   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:45.179195   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:45.679208   14717 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:45.767486   14717 kubeadm.go:985] duration metric: took 17.735321942s to wait for elevateKubeSystemPrivileges.
	I0813 20:09:45.767515   14717 kubeadm.go:392] StartCluster complete in 35.94511225s
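The burst of identical `kubectl get sa default` calls between 20:09:28 and 20:09:45 above is a poll loop: right after kubeadm finishes, the controller that creates the `default` ServiceAccount may not have run yet, so minikube retries roughly every half second until the account exists. The equivalent as a standalone shell loop:

    # Poll until the default ServiceAccount appears (what the log above does).
    until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done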
	I0813 20:09:45.767536   14717 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:45.767690   14717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:09:45.768126   14717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:46.283679   14717 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210813200856-13784" rescaled to 1
	I0813 20:09:46.283751   14717 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:09:46.283804   14717 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:09:46.283806   14717 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0813 20:09:46.285198   14717 out.go:177] * Verifying Kubernetes components...
	I0813 20:09:46.283939   14717 addons.go:59] Setting volumesnapshots=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.285332   14717 addons.go:135] Setting addon volumesnapshots=true in "addons-20210813200856-13784"
	I0813 20:09:46.285393   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.283945   14717 addons.go:59] Setting ingress=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.285430   14717 addons.go:135] Setting addon ingress=true in "addons-20210813200856-13784"
	I0813 20:09:46.283953   14717 addons.go:59] Setting metrics-server=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.285499   14717 addons.go:135] Setting addon metrics-server=true in "addons-20210813200856-13784"
	I0813 20:09:46.285549   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.286001   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.286074   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.283958   14717 addons.go:59] Setting helm-tiller=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.286175   14717 addons.go:135] Setting addon helm-tiller=true in "addons-20210813200856-13784"
	I0813 20:09:46.286205   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.286218   14717 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:09:46.286640   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.283963   14717 addons.go:59] Setting olm=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.286729   14717 addons.go:135] Setting addon olm=true in "addons-20210813200856-13784"
	I0813 20:09:46.286752   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.283956   14717 addons.go:59] Setting default-storageclass=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.286802   14717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210813200856-13784"
	I0813 20:09:46.287101   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.287190   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.283971   14717 addons.go:59] Setting registry=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.287386   14717 addons.go:135] Setting addon registry=true in "addons-20210813200856-13784"
	I0813 20:09:46.287407   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.283972   14717 addons.go:59] Setting storage-provisioner=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.287499   14717 addons.go:135] Setting addon storage-provisioner=true in "addons-20210813200856-13784"
	W0813 20:09:46.287508   14717 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:09:46.287534   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.287869   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.283977   14717 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210813200856-13784"
	I0813 20:09:46.287968   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.288021   14717 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210813200856-13784"
	I0813 20:09:46.284094   14717 config.go:177] Loaded profile config "addons-20210813200856-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:09:46.288071   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.288431   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.288481   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.288935   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.398064   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0813 20:09:46.398139   14717 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0813 20:09:46.398152   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0813 20:09:46.398213   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.399647   14717 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0813 20:09:46.401715   14717 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0813 20:09:46.413651   14717 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0813 20:09:46.410147   14717 addons.go:135] Setting addon default-storageclass=true in "addons-20210813200856-13784"
	I0813 20:09:46.413762   14717 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	W0813 20:09:46.413766   14717 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:09:46.413775   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0813 20:09:46.413826   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.413825   14717 host.go:66] Checking if "addons-20210813200856-13784" exists ...
	I0813 20:09:46.415228   14717 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0813 20:09:46.414493   14717 cli_runner.go:115] Run: docker container inspect addons-20210813200856-13784 --format={{.State.Status}}
	I0813 20:09:46.415291   14717 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:09:46.415374   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:09:46.415425   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.420809   14717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:09:46.422014   14717 out.go:177]   - Using image registry:2.7.1
	I0813 20:09:46.420918   14717 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:09:46.422109   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:09:46.423184   14717 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0813 20:09:46.422177   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.423289   14717 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0813 20:09:46.423300   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0813 20:09:46.423365   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.429556   14717 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 20:09:46.435732   14717 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 20:09:46.437109   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0813 20:09:46.434057   14717 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0813 20:09:46.437202   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0813 20:09:46.437273   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.438393   14717 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0813 20:09:46.438458   14717 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0813 20:09:46.438473   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0813 20:09:46.438529   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.439738   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0813 20:09:46.440946   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0813 20:09:46.442168   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0813 20:09:46.443389   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0813 20:09:46.444514   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0813 20:09:46.445683   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0813 20:09:46.446826   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0813 20:09:46.448045   14717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0813 20:09:46.448106   14717 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0813 20:09:46.448122   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0813 20:09:46.448183   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.480675   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.484050   14717 node_ready.go:35] waiting up to 6m0s for node "addons-20210813200856-13784" to be "Ready" ...
	I0813 20:09:46.484360   14717 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
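The pipeline above splices a hosts plugin stanza into the CoreDNS Corefile, immediately before the existing `forward . /etc/resolv.conf` line, so pods can resolve host.minikube.internal to the Docker gateway (192.168.49.1); the edited Corefile is then pushed back with `kubectl replace`. A sketch for inspecting the result:

    # The injected stanza (reconstructed from the sed program) should read:
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'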
	I0813 20:09:46.489126   14717 node_ready.go:49] node "addons-20210813200856-13784" has status "Ready":"True"
	I0813 20:09:46.489144   14717 node_ready.go:38] duration metric: took 5.063749ms waiting for node "addons-20210813200856-13784" to be "Ready" ...
	I0813 20:09:46.489155   14717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:09:46.501767   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.505624   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.510568   14717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-7npkg" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:46.519949   14717 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:09:46.519973   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:09:46.520027   14717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813200856-13784
	I0813 20:09:46.531587   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.540649   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.549255   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.552020   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.552490   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.590771   14717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200856-13784/id_rsa Username:docker}
	I0813 20:09:46.778494   14717 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0813 20:09:46.778524   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0813 20:09:46.778753   14717 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0813 20:09:46.778773   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0813 20:09:46.871797   14717 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0813 20:09:46.871828   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0813 20:09:46.965436   14717 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0813 20:09:46.965477   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0813 20:09:46.970332   14717 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0813 20:09:46.970356   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0813 20:09:46.974102   14717 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0813 20:09:46.974124   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0813 20:09:46.978990   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:09:46.979100   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0813 20:09:46.979257   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:09:47.059810   14717 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0813 20:09:47.059835   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0813 20:09:47.064476   14717 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:09:47.064537   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0813 20:09:47.064475   14717 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0813 20:09:47.064601   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0813 20:09:47.070434   14717 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 20:09:47.070452   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0813 20:09:47.080329   14717 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0813 20:09:47.080390   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0813 20:09:47.080724   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0813 20:09:47.175134   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 20:09:47.176410   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 20:09:47.179628   14717 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:09:47.179664   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:09:47.260232   14717 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0813 20:09:47.260260   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0813 20:09:47.276729   14717 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0813 20:09:47.276758   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0813 20:09:47.359533   14717 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0813 20:09:47.377025   14717 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:09:47.377060   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:09:47.458968   14717 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0813 20:09:47.459002   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0813 20:09:47.466041   14717 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0813 20:09:47.466070   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0813 20:09:47.579848   14717 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:47.579876   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0813 20:09:47.660294   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:09:47.678448   14717 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0813 20:09:47.678476   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0813 20:09:47.859069   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:47.868517   14717 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0813 20:09:47.868542   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0813 20:09:48.159429   14717 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0813 20:09:48.159457   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0813 20:09:48.262164   14717 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0813 20:09:48.262260   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0813 20:09:48.365208   14717 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0813 20:09:48.365233   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0813 20:09:48.482205   14717 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0813 20:09:48.482234   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0813 20:09:48.577294   14717 pod_ready.go:102] pod "coredns-558bd4d5db-7npkg" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:48.673748   14717 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0813 20:09:48.673776   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0813 20:09:48.782617   14717 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0813 20:09:48.782639   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0813 20:09:48.963748   14717 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 20:09:48.963818   14717 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0813 20:09:49.071726   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 20:09:49.980831   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (3.001691578s)
	I0813 20:09:49.980871   14717 addons.go:313] Verifying addon ingress=true in "addons-20210813200856-13784"
	I0813 20:09:49.981227   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.002208134s)
	I0813 20:09:49.981268   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.001991503s)
	I0813 20:09:49.981304   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.90055836s)
	I0813 20:09:49.981346   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (2.806183762s)
	I0813 20:09:49.982796   14717 out.go:177] * Verifying ingress addon...
	I0813 20:09:49.982901   14717 addons.go:313] Verifying addon registry=true in "addons-20210813200856-13784"
	I0813 20:09:49.984328   14717 out.go:177] * Verifying registry addon...
	I0813 20:09:49.984718   14717 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0813 20:09:49.986071   14717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0813 20:09:50.167437   14717 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0813 20:09:50.167464   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:50.171321   14717 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 20:09:50.171339   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:50.583650   14717 pod_ready.go:102] pod "coredns-558bd4d5db-7npkg" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:50.769431   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:50.770275   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:51.174544   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:51.188016   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:51.688043   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:51.765953   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:52.184570   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:52.270649   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:52.370063   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.709725399s)
	I0813 20:09:52.370102   14717 addons.go:313] Verifying addon metrics-server=true in "addons-20210813200856-13784"
	I0813 20:09:52.370129   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.193697048s)
	W0813 20:09:52.370153   14717 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 20:09:52.370190   14717 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 20:09:52.370327   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.51120555s)
	W0813 20:09:52.370353   14717 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 20:09:52.370361   14717 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 20:09:52.647114   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 20:09:52.683637   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:52.731496   14717 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:52.758886   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:53.167597   14717 pod_ready.go:102] pod "coredns-558bd4d5db-7npkg" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:53.174750   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:53.359148   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:53.681611   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:53.772818   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:54.370622   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:54.371489   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:54.672447   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:54.681120   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:55.165114   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.093276194s)
	I0813 20:09:55.165339   14717 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210813200856-13784"
	I0813 20:09:55.167571   14717 out.go:177] * Verifying csi-hostpath-driver addon...
	I0813 20:09:55.170399   14717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0813 20:09:55.173350   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:55.179630   14717 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0813 20:09:55.179654   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:55.266082   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:55.565652   14717 pod_ready.go:97] error getting pod "coredns-558bd4d5db-7npkg" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-7npkg" not found
	I0813 20:09:55.565695   14717 pod_ready.go:81] duration metric: took 9.05509751s waiting for pod "coredns-558bd4d5db-7npkg" in "kube-system" namespace to be "Ready" ...
	E0813 20:09:55.565709   14717 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-7npkg" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-7npkg" not found
	I0813 20:09:55.565723   14717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-v849t" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:55.674730   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:55.678341   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:55.777158   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:56.172020   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:56.266476   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:56.271843   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:56.683083   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:56.764500   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:56.779285   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:57.170687   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:57.175611   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:57.184101   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:57.664468   14717 pod_ready.go:102] pod "coredns-558bd4d5db-v849t" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:57.671534   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:57.675162   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:57.683778   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:57.802643   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.155483922s)
	I0813 20:09:57.802779   14717 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.07124537s)
	I0813 20:09:58.171550   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:58.175212   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:58.183969   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:58.672247   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:58.674875   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:58.684926   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:59.177547   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:59.180087   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:59.267334   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:59.671331   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:59.674393   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:59.683869   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:00.082037   14717 pod_ready.go:102] pod "coredns-558bd4d5db-v849t" in "kube-system" namespace has status "Ready":"False"
	I0813 20:10:00.171023   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:00.174673   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:00.184122   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:00.670715   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:00.675273   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:00.683905   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:01.170889   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:01.174308   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:01.183999   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:01.672334   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:01.674928   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:01.683888   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:02.083622   14717 pod_ready.go:102] pod "coredns-558bd4d5db-v849t" in "kube-system" namespace has status "Ready":"False"
	I0813 20:10:02.172230   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:02.175060   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:02.184206   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:02.671171   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:02.674613   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:02.684125   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:03.171973   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:03.176341   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:03.184315   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:03.670922   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:03.674612   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:03.686894   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:04.171921   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:04.174997   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:04.183871   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:04.582751   14717 pod_ready.go:102] pod "coredns-558bd4d5db-v849t" in "kube-system" namespace has status "Ready":"False"
	I0813 20:10:04.671078   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:04.674770   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:04.684032   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:05.170753   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:05.175196   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:05.183711   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:05.671601   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:05.674399   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:05.683740   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:06.171514   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:06.180439   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:06.183696   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:06.670665   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:06.675244   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:06.683789   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:07.083228   14717 pod_ready.go:102] pod "coredns-558bd4d5db-v849t" in "kube-system" namespace has status "Ready":"False"
	I0813 20:10:07.171154   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:07.174551   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:07.184331   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:07.671698   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:07.674782   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:07.684214   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:08.171673   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:08.175028   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:08.186947   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:08.671574   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:08.675241   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:08.684453   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:09.083732   14717 pod_ready.go:102] pod "coredns-558bd4d5db-v849t" in "kube-system" namespace has status "Ready":"False"
	I0813 20:10:09.170971   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:09.174453   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:09.184332   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:09.671228   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:09.675861   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:09.684379   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:10.171609   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:10.174559   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:10.184050   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:10.582165   14717 pod_ready.go:92] pod "coredns-558bd4d5db-v849t" in "kube-system" namespace has status "Ready":"True"
	I0813 20:10:10.582195   14717 pod_ready.go:81] duration metric: took 15.016463213s waiting for pod "coredns-558bd4d5db-v849t" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.582206   14717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210813200856-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.585777   14717 pod_ready.go:92] pod "etcd-addons-20210813200856-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:10:10.585800   14717 pod_ready.go:81] duration metric: took 3.586853ms waiting for pod "etcd-addons-20210813200856-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.585818   14717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210813200856-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.589311   14717 pod_ready.go:92] pod "kube-apiserver-addons-20210813200856-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:10:10.589325   14717 pod_ready.go:81] duration metric: took 3.497782ms waiting for pod "kube-apiserver-addons-20210813200856-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.589336   14717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210813200856-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.592515   14717 pod_ready.go:92] pod "kube-controller-manager-addons-20210813200856-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:10:10.592534   14717 pod_ready.go:81] duration metric: took 3.191065ms waiting for pod "kube-controller-manager-addons-20210813200856-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.592543   14717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gv6kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.595565   14717 pod_ready.go:92] pod "kube-proxy-gv6kb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:10:10.595580   14717 pod_ready.go:81] duration metric: took 3.030843ms waiting for pod "kube-proxy-gv6kb" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.595588   14717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210813200856-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.671715   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:10.674599   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:10.684448   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:10.980816   14717 pod_ready.go:92] pod "kube-scheduler-addons-20210813200856-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:10:10.980840   14717 pod_ready.go:81] duration metric: took 385.244539ms waiting for pod "kube-scheduler-addons-20210813200856-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:10:10.980856   14717 pod_ready.go:38] duration metric: took 24.491687239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:10:10.980881   14717 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:10:10.980928   14717 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:10:11.010121   14717 api_server.go:70] duration metric: took 24.726308209s to wait for apiserver process to appear ...
	I0813 20:10:11.010144   14717 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:10:11.010157   14717 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:10:11.014542   14717 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:10:11.015346   14717 api_server.go:139] control plane version: v1.21.3
	I0813 20:10:11.015374   14717 api_server.go:129] duration metric: took 5.222309ms to wait for apiserver health ...
	I0813 20:10:11.015385   14717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:10:11.171684   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:11.175016   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:11.185064   14717 system_pods.go:59] 19 kube-system pods found
	I0813 20:10:11.185092   14717 system_pods.go:61] "coredns-558bd4d5db-v849t" [c3009d78-9596-4a85-a64e-58f142a9e507] Running
	I0813 20:10:11.185101   14717 system_pods.go:61] "csi-hostpath-attacher-0" [9e8ab739-2911-4f3c-9c75-971497cb9c21] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0813 20:10:11.185108   14717 system_pods.go:61] "csi-hostpath-provisioner-0" [34d9ade8-8ccf-4747-bfde-3361833e9d8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0813 20:10:11.185116   14717 system_pods.go:61] "csi-hostpath-resizer-0" [9a7055c9-3248-4eb5-8efc-5d3e06a91afe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0813 20:10:11.185123   14717 system_pods.go:61] "csi-hostpath-snapshotter-0" [f9d3e4e4-a839-4203-924a-a9e724996440] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0813 20:10:11.185129   14717 system_pods.go:61] "csi-hostpathplugin-0" [c6d7bee7-484a-4ab7-a071-4bcba77bd9e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0813 20:10:11.185137   14717 system_pods.go:61] "etcd-addons-20210813200856-13784" [dc472b8b-74c4-47aa-ac77-bd9ec372dbee] Running
	I0813 20:10:11.185144   14717 system_pods.go:61] "kindnet-h8pp8" [7972a95e-914a-4996-b78c-97dfe217c83c] Running
	I0813 20:10:11.185150   14717 system_pods.go:61] "kube-apiserver-addons-20210813200856-13784" [34cd12df-0476-4560-8d00-64a8348be3b3] Running
	I0813 20:10:11.185154   14717 system_pods.go:61] "kube-controller-manager-addons-20210813200856-13784" [bb30730c-affb-498e-8943-c4f46ab1a99b] Running
	I0813 20:10:11.185158   14717 system_pods.go:61] "kube-proxy-gv6kb" [f87a3cfa-d892-49a6-83de-d1d67d2f86cc] Running
	I0813 20:10:11.185162   14717 system_pods.go:61] "kube-scheduler-addons-20210813200856-13784" [783ba2b7-008a-4aed-acd5-d35f76662818] Running
	I0813 20:10:11.185166   14717 system_pods.go:61] "metrics-server-77c99ccb96-wb8ql" [eb3d3448-dc22-4ab0-be69-5b5da787bf66] Running
	I0813 20:10:11.185170   14717 system_pods.go:61] "registry-hg76n" [0bb28729-59bb-42d5-ba36-ff47b8317260] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 20:10:11.185176   14717 system_pods.go:61] "registry-proxy-dx5mb" [0c04ef2d-b7fa-4313-8fa9-9bfea7d18a20] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 20:10:11.185183   14717 system_pods.go:61] "snapshot-controller-989f9ddc8-2swh2" [7c645af9-48f7-4190-b6a2-ecc2295fbdbf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:10:11.185190   14717 system_pods.go:61] "snapshot-controller-989f9ddc8-wj4fj" [d8c88429-78ad-4fb6-a2f8-aed7d15671be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:10:11.185198   14717 system_pods.go:61] "storage-provisioner" [37c88ab4-0829-4e6e-8bec-180be59e5c70] Running
	I0813 20:10:11.185203   14717 system_pods.go:61] "tiller-deploy-768d69497-6wvws" [4c7f47dd-cf8b-4dcb-b03e-a9cbf44ee64c] Running
	I0813 20:10:11.185210   14717 system_pods.go:74] duration metric: took 169.81928ms to wait for pod list to return data ...
	I0813 20:10:11.185217   14717 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:10:11.186535   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:11.380619   14717 default_sa.go:45] found service account: "default"
	I0813 20:10:11.380643   14717 default_sa.go:55] duration metric: took 195.419467ms for default service account to be created ...
	I0813 20:10:11.380652   14717 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:10:11.585398   14717 system_pods.go:86] 19 kube-system pods found
	I0813 20:10:11.585435   14717 system_pods.go:89] "coredns-558bd4d5db-v849t" [c3009d78-9596-4a85-a64e-58f142a9e507] Running
	I0813 20:10:11.585449   14717 system_pods.go:89] "csi-hostpath-attacher-0" [9e8ab739-2911-4f3c-9c75-971497cb9c21] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0813 20:10:11.585460   14717 system_pods.go:89] "csi-hostpath-provisioner-0" [34d9ade8-8ccf-4747-bfde-3361833e9d8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0813 20:10:11.585473   14717 system_pods.go:89] "csi-hostpath-resizer-0" [9a7055c9-3248-4eb5-8efc-5d3e06a91afe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0813 20:10:11.585509   14717 system_pods.go:89] "csi-hostpath-snapshotter-0" [f9d3e4e4-a839-4203-924a-a9e724996440] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0813 20:10:11.585541   14717 system_pods.go:89] "csi-hostpathplugin-0" [c6d7bee7-484a-4ab7-a071-4bcba77bd9e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0813 20:10:11.585555   14717 system_pods.go:89] "etcd-addons-20210813200856-13784" [dc472b8b-74c4-47aa-ac77-bd9ec372dbee] Running
	I0813 20:10:11.585568   14717 system_pods.go:89] "kindnet-h8pp8" [7972a95e-914a-4996-b78c-97dfe217c83c] Running
	I0813 20:10:11.585580   14717 system_pods.go:89] "kube-apiserver-addons-20210813200856-13784" [34cd12df-0476-4560-8d00-64a8348be3b3] Running
	I0813 20:10:11.585594   14717 system_pods.go:89] "kube-controller-manager-addons-20210813200856-13784" [bb30730c-affb-498e-8943-c4f46ab1a99b] Running
	I0813 20:10:11.585606   14717 system_pods.go:89] "kube-proxy-gv6kb" [f87a3cfa-d892-49a6-83de-d1d67d2f86cc] Running
	I0813 20:10:11.585619   14717 system_pods.go:89] "kube-scheduler-addons-20210813200856-13784" [783ba2b7-008a-4aed-acd5-d35f76662818] Running
	I0813 20:10:11.585629   14717 system_pods.go:89] "metrics-server-77c99ccb96-wb8ql" [eb3d3448-dc22-4ab0-be69-5b5da787bf66] Running
	I0813 20:10:11.585638   14717 system_pods.go:89] "registry-hg76n" [0bb28729-59bb-42d5-ba36-ff47b8317260] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 20:10:11.585650   14717 system_pods.go:89] "registry-proxy-dx5mb" [0c04ef2d-b7fa-4313-8fa9-9bfea7d18a20] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 20:10:11.585663   14717 system_pods.go:89] "snapshot-controller-989f9ddc8-2swh2" [7c645af9-48f7-4190-b6a2-ecc2295fbdbf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:10:11.585676   14717 system_pods.go:89] "snapshot-controller-989f9ddc8-wj4fj" [d8c88429-78ad-4fb6-a2f8-aed7d15671be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:10:11.585687   14717 system_pods.go:89] "storage-provisioner" [37c88ab4-0829-4e6e-8bec-180be59e5c70] Running
	I0813 20:10:11.585699   14717 system_pods.go:89] "tiller-deploy-768d69497-6wvws" [4c7f47dd-cf8b-4dcb-b03e-a9cbf44ee64c] Running
	I0813 20:10:11.585712   14717 system_pods.go:126] duration metric: took 205.054661ms to wait for k8s-apps to be running ...
	I0813 20:10:11.585725   14717 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:10:11.585774   14717 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:10:11.595413   14717 system_svc.go:56] duration metric: took 9.681821ms WaitForService to wait for kubelet.
	I0813 20:10:11.595439   14717 kubeadm.go:547] duration metric: took 25.311631509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:10:11.595461   14717 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:10:11.672073   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:11.674623   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:11.684345   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:11.781944   14717 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:10:11.781975   14717 node_conditions.go:123] node cpu capacity is 8
	I0813 20:10:11.781991   14717 node_conditions.go:105] duration metric: took 186.525563ms to run NodePressure ...
	I0813 20:10:11.782004   14717 start.go:231] waiting for startup goroutines ...
	I0813 20:10:12.171545   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:12.174599   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:12.184206   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:12.670968   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:12.674628   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:12.684328   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:13.171631   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:13.174828   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:13.184149   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:13.671270   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:13.674694   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:13.684231   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:14.171534   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:14.175506   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:14.185124   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:14.671644   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:14.674751   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:14.685642   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:15.170793   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:15.175220   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:15.183919   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:15.670448   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:15.675059   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:15.683617   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:16.171752   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:16.175177   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:16.183398   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:16.671656   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:16.674627   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:16.684451   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:17.171529   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:17.174565   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:17.184232   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:17.671080   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:17.674541   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:17.683941   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:18.170988   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:18.174641   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:18.184249   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:18.671294   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:18.674598   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:18.683984   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:19.171949   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:19.179503   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:19.185061   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:19.673446   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:19.758484   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:19.772552   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:20.171842   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:20.175568   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:20.184957   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:20.671988   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:20.684211   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:20.685112   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:21.172111   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:21.174755   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:21.184808   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:21.672217   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:21.674728   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:21.684835   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:22.172074   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:22.181321   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:22.184650   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:22.671838   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:22.675535   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:22.684414   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:23.171862   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:23.176200   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:23.183775   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:23.671505   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:23.674488   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:23.683804   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:24.170800   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:24.175336   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:24.183859   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:24.670245   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:24.674680   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:24.684043   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:25.179221   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:25.180363   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:25.184219   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:25.679202   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:25.679458   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:25.759168   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:26.171148   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:26.175058   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:26.184820   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:26.670660   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:26.675555   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:26.684175   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:27.170619   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:27.175567   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:27.183988   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:27.671270   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:27.675210   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:27.684189   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:28.170570   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:28.175225   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:28.183647   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:28.671067   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:28.674608   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:28.684676   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:29.171495   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:29.174850   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:29.184587   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:29.671347   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:29.674505   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:29.684165   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:30.170638   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:30.175301   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:30.183813   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:30.670542   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:30.675315   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:30.683825   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:31.171761   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:31.178829   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:31.186279   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:31.670698   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:31.679388   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:31.684115   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:32.170381   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:32.174938   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:32.184201   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:32.670550   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:32.675106   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:32.683365   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:33.170926   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:33.174666   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:33.184321   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:33.670708   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:33.675374   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:33.686390   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:34.171406   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:34.174902   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:34.184911   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:34.672267   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:34.674823   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:34.684606   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:35.172159   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:35.175422   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:35.214081   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:35.670684   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:35.675106   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:35.683950   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:36.170370   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:36.175178   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:36.183917   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:36.670669   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:36.674992   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:36.683297   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:37.170988   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:37.174619   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:37.184277   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:37.670381   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:37.674918   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:37.683053   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:38.170763   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:38.175408   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:38.184315   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:38.670944   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:38.675823   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:38.684520   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:39.171902   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:39.174936   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:39.185364   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:39.670292   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:39.674802   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:39.685853   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:40.170439   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:40.174982   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:40.183444   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:40.671247   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:40.674945   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:40.684570   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:41.171247   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:41.174965   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:41.184443   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:41.671141   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:41.674602   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:41.683921   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:42.170441   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:42.174971   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:42.184281   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:42.670970   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:42.675246   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:42.683557   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:43.170857   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:43.175305   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:43.183872   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:43.671595   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:43.674717   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:43.685281   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:44.171960   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:44.175404   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:44.184819   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:44.671125   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:44.674594   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:44.684143   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:45.170723   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:45.175648   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:45.184283   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:45.670613   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:45.675132   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:45.684292   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:46.170838   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:46.175394   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:46.185786   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:46.670550   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:46.674993   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:46.683407   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:47.170980   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:47.174394   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:47.184401   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:47.671098   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:47.674878   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:47.684638   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:48.171568   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:48.174873   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:48.186406   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:48.672021   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:48.677038   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:48.684801   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:49.170907   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:49.175453   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:49.184610   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:49.671022   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:49.674890   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:49.685544   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:50.171045   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:50.175401   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:50.184642   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:50.672308   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:50.674726   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:50.684603   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:51.171493   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:51.174830   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:51.185467   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:51.671064   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:51.675435   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:51.684122   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:52.170751   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:52.175884   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:52.185418   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:52.670390   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:52.676750   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:52.689875   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:53.171731   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:53.174933   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:53.184964   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:53.671515   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:53.674862   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:53.685349   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:54.630695   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:54.631783   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:54.633454   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:54.888565   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:54.888884   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:54.889579   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:55.170912   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:55.175650   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:55.184096   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:55.670623   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:55.675203   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:55.683765   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:56.171476   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:56.174752   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:56.184448   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:56.670744   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:56.675299   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:56.683686   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:57.171426   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:57.174333   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:57.183871   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:57.671342   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:57.674264   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:57.683755   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:58.171303   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:58.174773   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:58.184423   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:58.670889   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:58.674614   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:58.684210   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:59.170821   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:59.175418   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:59.183762   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:59.673980   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:59.674962   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:59.683376   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:00.171356   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:00.175090   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:11:00.188844   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:00.671311   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:00.675311   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:11:00.684810   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:01.171128   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:01.174824   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:11:01.184522   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:01.671007   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:01.674706   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:11:01.684467   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:02.170970   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:02.174836   14717 kapi.go:108] duration metric: took 1m12.188763126s to wait for kubernetes.io/minikube-addons=registry ...
	I0813 20:11:02.184395   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:02.670784   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:02.684114   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:03.170858   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:03.184436   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:03.670933   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:03.684530   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:04.171400   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:04.183779   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:04.671602   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:04.685390   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:05.368990   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:05.369707   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:05.672629   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:05.693839   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:06.173524   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:06.262089   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:06.672394   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:06.685291   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:07.170677   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:07.184373   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:07.670843   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:07.684643   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:08.170572   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:08.184571   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:08.671420   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:08.685244   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:09.171441   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:09.184861   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:09.671225   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:09.685341   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:10.171485   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:10.185139   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:10.671759   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:10.684022   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:11.171702   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:11.183884   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:11.670347   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:11.683816   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:12.181648   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:12.261957   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:12.671182   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:12.683916   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:13.171095   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:13.184856   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:13.671082   14717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:13.685012   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:14.170399   14717 kapi.go:108] duration metric: took 1m24.185676473s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0813 20:11:14.183978   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:14.684525   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:15.183974   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:15.684617   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:16.184498   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:16.685203   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:17.185986   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:17.689868   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:18.185180   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:18.684374   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:19.185932   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:19.684458   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:20.184206   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:20.684136   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:21.184247   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:21.684278   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:22.184492   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:22.684369   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:23.184599   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:23.684312   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:24.184312   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:24.685347   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:25.186031   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:25.684884   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:26.187158   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:26.683955   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:27.184242   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:27.684405   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:28.184559   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:28.684336   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:29.185166   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:29.686456   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:30.184782   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:30.684460   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:31.184401   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:31.684865   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:32.185540   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:32.684160   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:33.183701   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:33.684899   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:34.186211   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:34.684231   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:35.184515   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:35.683965   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:36.265847   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:36.765679   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:37.262550   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:37.762259   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:38.262374   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:38.686195   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:39.185538   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:39.684531   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:40.263510   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:40.762644   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:41.186218   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:41.863680   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:42.264210   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:42.760760   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:43.265261   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:43.763385   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:44.185581   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:44.762525   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:45.185807   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:45.684749   14717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:11:46.263417   14717 kapi.go:108] duration metric: took 1m51.093012526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0813 20:11:46.265210   14717 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, helm-tiller, metrics-server, olm, volumesnapshots, registry, ingress, csi-hostpath-driver
	I0813 20:11:46.265240   14717 addons.go:344] enableAddons completed in 1m59.981448518s
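(Editor's note: the addon list in the summary above corresponds to what `minikube addons enable` turns on at start; for reference, the two addons this test exercises can be enabled against the same profile with the report's own binary, e.g.:)

    out/minikube-linux-amd64 -p addons-20210813200856-13784 addons enable ingress
    out/minikube-linux-amd64 -p addons-20210813200856-13784 addons enable csi-hostpath-driver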
	I0813 20:11:46.319295   14717 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:11:46.617926   14717 out.go:177] * Done! kubectl is now configured to use "addons-20210813200856-13784" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:09:02 UTC, end at Fri 2021-08-13 20:17:54 UTC. --
	Aug 13 20:14:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:14:35.765679218Z" level=info msg="Removed pod sandbox: 5a165013c72082ec43ff9b0fbb8b93d16a0c2e5df301290d0acf6982ac2f6e51" id=0c94ad21-679d-4b6e-bcc7-7b9fec44d185 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 20:17:27 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:27.106038807Z" level=info msg="Stopping container: fa26036e71506b59bcedef3fca603d99722423cd2eb57aae15c04c0e6d84a518 (timeout: 29s)" id=4ecbcdc5-6630-4f9b-9c7a-c3cb13b7f7f4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.773672680Z" level=info msg="Removing container: 709e55dd3b9b7fe9e58b8a22d065adc9cdc09051b3e72c8cc3ca516262cf0c2b" id=06cba4ae-7c5d-481a-892f-cb0ab7e88582 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.811844089Z" level=info msg="Removed container 709e55dd3b9b7fe9e58b8a22d065adc9cdc09051b3e72c8cc3ca516262cf0c2b: ingress-nginx/ingress-nginx-admission-patch-fqll4/patch" id=06cba4ae-7c5d-481a-892f-cb0ab7e88582 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.812928600Z" level=info msg="Removing container: a3d7fa8c97e253335351f89d5f3b3e6ccfab516fcd4f812cf2075f5ac369cc73" id=02a05783-8c9a-4fbf-a093-9eb909a37eb7 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.859793322Z" level=info msg="Removed container a3d7fa8c97e253335351f89d5f3b3e6ccfab516fcd4f812cf2075f5ac369cc73: ingress-nginx/ingress-nginx-admission-create-6zkjg/create" id=02a05783-8c9a-4fbf-a093-9eb909a37eb7 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.861226886Z" level=info msg="Stopping pod sandbox: 4619bc19ba7bc39a33c94c48264e66e611b66de4998af3ec25ba53d305a3845d" id=3d908eff-2aa5-43d1-9d5d-6d8692764e25 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.861279557Z" level=info msg="Stopped pod sandbox (already stopped): 4619bc19ba7bc39a33c94c48264e66e611b66de4998af3ec25ba53d305a3845d" id=3d908eff-2aa5-43d1-9d5d-6d8692764e25 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.861547450Z" level=info msg="Removing pod sandbox: 4619bc19ba7bc39a33c94c48264e66e611b66de4998af3ec25ba53d305a3845d" id=ab7ddf4a-9841-4212-bb57-5f6c0e7fb522 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.989632433Z" level=info msg="Removed pod sandbox: 4619bc19ba7bc39a33c94c48264e66e611b66de4998af3ec25ba53d305a3845d" id=ab7ddf4a-9841-4212-bb57-5f6c0e7fb522 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.990133461Z" level=info msg="Stopping pod sandbox: 540e88c77d19d75abd00225d55c1530c42795c4fbee626948a19448b642e943d" id=6c82c1f8-b424-4f0e-9880-ca916ba335c5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.990174928Z" level=info msg="Stopped pod sandbox (already stopped): 540e88c77d19d75abd00225d55c1530c42795c4fbee626948a19448b642e943d" id=6c82c1f8-b424-4f0e-9880-ca916ba335c5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:35 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:35.990448815Z" level=info msg="Removing pod sandbox: 540e88c77d19d75abd00225d55c1530c42795c4fbee626948a19448b642e943d" id=c1ac4e1e-cdae-4fda-b89b-7e339536d8fd name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 20:17:36 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:36.105673778Z" level=info msg="Removed pod sandbox: 540e88c77d19d75abd00225d55c1530c42795c4fbee626948a19448b642e943d" id=c1ac4e1e-cdae-4fda-b89b-7e339536d8fd name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 20:17:37 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:37.296069809Z" level=info msg="Stopped container fa26036e71506b59bcedef3fca603d99722423cd2eb57aae15c04c0e6d84a518: ingress-nginx/ingress-nginx-controller-59b45fb494-c57mh/controller" id=4ecbcdc5-6630-4f9b-9c7a-c3cb13b7f7f4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 13 20:17:37 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:37.296524433Z" level=info msg="Stopping pod sandbox: b6e7095c1398a8b71efc9cb6f3f811d06d761b83cb16cc7fd89317c646e08cd8" id=e348243f-489d-4f3e-af2a-3e07f3163cdf name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:37 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:37.307578513Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-59b45fb494-c57mh Namespace:ingress-nginx ID:b6e7095c1398a8b71efc9cb6f3f811d06d761b83cb16cc7fd89317c646e08cd8 NetNS:/var/run/netns/a8eebe91-53bb-4308-8a13-bf48c3077ca1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 20:17:37 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:37.307723071Z" level=info msg="About to del CNI network kindnet (type=ptp)"
	Aug 13 20:17:37 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:37.476291681Z" level=info msg="Removing container: fa26036e71506b59bcedef3fca603d99722423cd2eb57aae15c04c0e6d84a518" id=18f7f283-9943-483d-96fd-7616be001b62 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:17:37 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:37.493561999Z" level=info msg="Removed container fa26036e71506b59bcedef3fca603d99722423cd2eb57aae15c04c0e6d84a518: ingress-nginx/ingress-nginx-controller-59b45fb494-c57mh/controller" id=18f7f283-9943-483d-96fd-7616be001b62 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:17:37 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:37.519974052Z" level=info msg="Stopped pod sandbox: b6e7095c1398a8b71efc9cb6f3f811d06d761b83cb16cc7fd89317c646e08cd8" id=e348243f-489d-4f3e-af2a-3e07f3163cdf name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:38 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:38.476708673Z" level=info msg="Stopping pod sandbox: b6e7095c1398a8b71efc9cb6f3f811d06d761b83cb16cc7fd89317c646e08cd8" id=95bd9e97-e501-4e53-94c1-7d9d4017d3f2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:38 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:38.476750228Z" level=info msg="Stopped pod sandbox (already stopped): b6e7095c1398a8b71efc9cb6f3f811d06d761b83cb16cc7fd89317c646e08cd8" id=95bd9e97-e501-4e53-94c1-7d9d4017d3f2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:39 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:39.478331306Z" level=info msg="Stopping pod sandbox: b6e7095c1398a8b71efc9cb6f3f811d06d761b83cb16cc7fd89317c646e08cd8" id=51b95780-8388-42ed-87b1-068588580783 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 20:17:39 addons-20210813200856-13784 crio[365]: time="2021-08-13 20:17:39.478395039Z" level=info msg="Stopped pod sandbox (already stopped): b6e7095c1398a8b71efc9cb6f3f811d06d761b83cb16cc7fd89317c646e08cd8" id=51b95780-8388-42ed-87b1-068588580783 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	ec3008e85748f       europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8   4 minutes ago       Running             private-image-eu          0                   b39f48b6dcc34
	d8457ac0be694       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                4 minutes ago       Running             etcd-restore-operator     0                   cb938f63209e5
	aea2eeb0e67bc       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                4 minutes ago       Running             etcd-backup-operator      0                   cb938f63209e5
	71976c7b65ce8       quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b                                            4 minutes ago       Running             etcd-operator             0                   cb938f63209e5
	e6a44ecf80e8b       docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce                                                 4 minutes ago       Running             nginx                     0                   43b52d3a8f7d2
	80c339715b932       us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8                5 minutes ago       Running             private-image             0                   83a6173d207ca
	2e72a5254069b       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                               5 minutes ago       Running             busybox                   0                   3befbc8d4fdad
	5dd9d8c91ad92       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             packageserver             0                   9e6082252deee
	41ce8944ae7f1       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0                 6 minutes ago       Running             registry-server           0                   b9f268a2990be
	054c2cb34842f       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             packageserver             0                   338e3b194f343
	8d216a85a22e0       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          7 minutes ago       Running             catalog-operator          0                   4944bcc1c3c30
	10a7b060375d8       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          7 minutes ago       Running             olm-operator              0                   631c57999d428
	f92bae33dea4c       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                                                                7 minutes ago       Running             coredns                   0                   3244e57ec7699
	7994700d56829       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                                8 minutes ago       Running             storage-provisioner       0                   96c21f387f4d5
	26f1ba4ca8bf3       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                                                                8 minutes ago       Running             kindnet-cni               0                   9a637a9f984c9
	c6a6ed9aa9060       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                                                                8 minutes ago       Running             kube-proxy                0                   d08d031edd565
	9acd483a078dc       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                                                                8 minutes ago       Running             etcd                      0                   885fc3668f0fa
	340de16263ed5       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                                                                8 minutes ago       Running             kube-apiserver            0                   438990faad1ce
	961f26cdb35db       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                                                                8 minutes ago       Running             kube-scheduler            0                   8b5b2a92f0ee4
	88caec66a2909       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                                                                8 minutes ago       Running             kube-controller-manager   0                   bb9cf1622908f
	
	* 
	* ==> coredns [f92bae33dea4c3237d32a1a5880448d719159539337fd6b3a543a88921ddc10a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210813200856-13784
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210813200856-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=addons-20210813200856-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_09_28_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210813200856-13784
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:09:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210813200856-13784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:17:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:13:33 +0000   Fri, 13 Aug 2021 20:09:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:13:33 +0000   Fri, 13 Aug 2021 20:09:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:13:33 +0000   Fri, 13 Aug 2021 20:09:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:13:33 +0000   Fri, 13 Aug 2021 20:09:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210813200856-13784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                8c66d169-8b38-45f9-8bde-14321b8771f9
	  Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     private-image-7ff9c8c74f-fjb6v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  default                     private-image-eu-5956d58f9f-h95lz                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 coredns-558bd4d5db-v849t                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m9s
	  kube-system                 etcd-addons-20210813200856-13784                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m22s
	  kube-system                 kindnet-h8pp8                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m10s
	  kube-system                 kube-apiserver-addons-20210813200856-13784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-controller-manager-addons-20210813200856-13784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-proxy-gv6kb                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-addons-20210813200856-13784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  my-etcd                     etcd-operator-85cd4f54cd-qmnnj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  olm                         catalog-operator-75d496484d-b7xnt                      10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         8m3s
	  olm                         olm-operator-859c88c96-wg7bc                           10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         8m3s
	  olm                         operatorhubio-catalog-7njfz                            10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m30s
	  olm                         packageserver-765fb55d64-6j8sl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  olm                         packageserver-765fb55d64-6jlql                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                880m (11%)  100m (1%)
	  memory             510Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m37s (x5 over 8m37s)  kubelet     Node addons-20210813200856-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s (x4 over 8m37s)  kubelet     Node addons-20210813200856-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s (x4 over 8m37s)  kubelet     Node addons-20210813200856-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m23s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m23s                  kubelet     Node addons-20210813200856-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s                  kubelet     Node addons-20210813200856-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s                  kubelet     Node addons-20210813200856-13784 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m13s                  kubelet     Node addons-20210813200856-13784 status is now: NodeReady
	  Normal  Starting                 8m8s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 91 18 be 90 8e 08 06        ......n.......
	[  +2.358713] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[  +8.195421] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[  +2.181021] IPv4: martian source 10.244.0.34 from 10.244.0.34, on dev veth6daf63df
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 52 81 68 cf 76 62 08 06        ......R.h.vb..
	[ +13.945833] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[Aug13 20:14] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[Aug13 20:15] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[  +1.026941] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[  +2.015840] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[  +4.063734] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[  +8.191412] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[ +16.126851] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	[Aug13 20:16] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 4e ab b8 96 7f 9b 1e 0a 89 bc 49 9b 08 00        N.........I...
	
	* 
	* ==> etcd [71976c7b65ce806b7df9a3b0ac618aec2b50ccb8f4b732735b8a6ca595ed49e6] <==
	* time="2021-08-13T20:13:06Z" level=info msg="etcd-operator Version: 0.9.4"
	time="2021-08-13T20:13:06Z" level=info msg="Git SHA: c8a1c64"
	time="2021-08-13T20:13:06Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:13:06Z" level=info msg="Go OS/Arch: linux/amd64"
	E0813 20:13:06.796682       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"4e3f2d2d-498a-40c5-bbce-0b923299c021", ResourceVersion:"1926", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482386, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-qmnnj\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:13:06Z\",\"renewTime\":\"2021-08-13T20:13:06Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-qmnnj became leader'
	
	* 
	* ==> etcd [9acd483a078dc13d61df7e5b582eff3c177deebb6319fcf011f526dea0b2cae2] <==
	* 2021-08-13 20:13:54.085247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:04.085797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:14.085445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:24.085229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:34.085808 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:44.085436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:54.085202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:04.084887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:14.085540 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:24.085302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:34.085556 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:44.085547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:54.085161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:04.084884 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:14.085259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:24.085612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:34.084762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:44.085014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:54.084932 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:04.085031 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:14.085344 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:24.085250 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:34.084727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:44.085013 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:54.085046 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [aea2eeb0e67bc7f41e1fcea557ff15a91022acf1e18a36d307b84335a7f91534] <==
	* time="2021-08-13T20:13:07Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:13:07Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-13T20:13:07Z" level=info msg="etcd-backup-operator Version: 0.9.4"
	time="2021-08-13T20:13:07Z" level=info msg="Git SHA: c8a1c64"
	E0813 20:13:07.034446       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"a86ba1b4-7a09-4382-8cb4-b9cd8d12c569", ResourceVersion:"1930", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482387, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-qmnnj\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:13:07Z\",\"renewTime\":\"2021-08-13T20:13:07Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-qmnnj became leader'
	time="2021-08-13T20:13:07Z" level=info msg="starting backup controller" pkg=controller
	
	* 
	* ==> etcd [d8457ac0be69436fc49c3a59f2a65d7f72e5b688e29b968c26662e0b05c4f59a] <==
	* time="2021-08-13T20:13:07Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:13:07Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-13T20:13:07Z" level=info msg="etcd-restore-operator Version: 0.9.4"
	time="2021-08-13T20:13:07Z" level=info msg="Git SHA: c8a1c64"
	E0813 20:13:07.271052       1 leaderelection.go:274] error initially creating leader election record: endpoints "etcd-restore-operator" already exists
	E0813 20:13:10.730043       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"d6349572-aedb-40a5-867a-9bcc37895ad8", ResourceVersion:"2002", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482387, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-qmnnj\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:13:10Z\",\"renewTime\":\"2021-08-13T20:13:10Z\",\"leaderTransitions\":1}", "endpoints.kubernetes.io/last-change-trigger-time":"2021-08-13T20:13:07Z"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-qmnnj became leader'
	time="2021-08-13T20:13:10Z" level=info msg="listening on 0.0.0.0:19999"
	time="2021-08-13T20:13:10Z" level=info msg="starting restore controller" pkg=controller
	
	* 
	* ==> kernel <==
	*  20:17:55 up  1:00,  0 users,  load average: 0.59, 0.80, 0.62
	Linux addons-20210813200856-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [340de16263ed54c8a91570112f9643829f1b077dd7fc9e1129f26f96b10bd20f] <==
	* W0813 20:13:40.764282       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0813 20:13:40.785365       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	I0813 20:13:44.778136       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:13:44.778172       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:13:44.778180       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:14:27.888051       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:14:27.888091       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:14:27.888099       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:15:07.131112       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:15:07.131151       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:15:07.131158       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:15:50.581940       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:15:50.581977       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:15:50.581984       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:16:30.766424       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:16:30.766461       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:16:30.766469       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:17:01.041192       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:17:01.041243       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:17:01.041254       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 20:17:26.106742       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0813 20:17:30.616599       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0813 20:17:43.206469       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:17:43.206517       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:17:43.206528       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [88caec66a2909a71caeda6b514c4a8b64a8bf9b26d6c856e5d2255f59d439eb1] <==
	* I0813 20:13:46.107738       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:13:46.312456       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0813 20:13:46.312503       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0813 20:13:48.184839       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:13:48.446787       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:13:49.893967       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:13:57.557079       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:00.275122       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:01.210240       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:16.517329       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:23.261285       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:25.278988       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:47.094019       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:14:54.311945       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:00.340787       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:20.906304       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:31.983000       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:15:53.245602       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:16:20.736569       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:16:20.775428       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:16:31.103761       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:17:15.531725       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:17:19.278451       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:17:24.278368       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 20:17:30.845923       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-wxdj4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [c6a6ed9aa9060772dae4e0584c1fcc7f7291f08e9b10406be41059f7f04ff425] <==
	* I0813 20:09:46.892189       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:09:46.892236       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:09:46.892266       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:09:47.371814       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:09:47.371860       1 server_others.go:212] Using iptables Proxier.
	I0813 20:09:47.371873       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:09:47.371887       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:09:47.372264       1 server.go:643] Version: v1.21.3
	I0813 20:09:47.376425       1 config.go:315] Starting service config controller
	I0813 20:09:47.376446       1 config.go:224] Starting endpoint slice config controller
	I0813 20:09:47.376453       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:09:47.376456       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:09:47.467506       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:09:47.479125       1 shared_informer.go:247] Caches are synced for service config 
	W0813 20:09:47.561974       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:09:47.658966       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [961f26cdb35db4503c69a8772e62b7cf5f720aaa25e5fac39afb07beae924af0] <==
	* W0813 20:09:24.762505       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:09:24.777198       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:09:24.777234       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:09:24.777526       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:09:24.777588       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:09:24.780535       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:24.780551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:09:24.780546       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:24.780605       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:24.780627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:09:24.780706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:09:24.780756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:24.780893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:09:24.781152       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:09:24.781747       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:09:24.781887       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:09:24.782023       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:09:24.782745       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:09:24.782771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:09:25.656938       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:25.745567       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:09:25.786688       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:09:25.826736       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:09:25.832549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0813 20:09:27.877697       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:09:02 UTC, end at Fri 2021-08-13 20:17:55 UTC. --
	Aug 13 20:17:33 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:33.494718    1567 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:17:35 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:35.772661    1567 scope.go:111] "RemoveContainer" containerID="709e55dd3b9b7fe9e58b8a22d065adc9cdc09051b3e72c8cc3ca516262cf0c2b"
	Aug 13 20:17:35 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:35.812067    1567 scope.go:111] "RemoveContainer" containerID="a3d7fa8c97e253335351f89d5f3b3e6ccfab516fcd4f812cf2075f5ac369cc73"
	Aug 13 20:17:37 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:37.473403    1567 scope.go:111] "RemoveContainer" containerID="fa26036e71506b59bcedef3fca603d99722423cd2eb57aae15c04c0e6d84a518"
	Aug 13 20:17:37 addons-20210813200856-13784 kubelet[1567]: W0813 20:17:37.785852    1567 container.go:586] Failed to update stats for container "/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d": /sys/fs/cgroup/cpuset/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/cpuset.cpus found to be empty, continuing to push stats
	Aug 13 20:17:38 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:38.636907    1567 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxqx7\" (UniqueName: \"kubernetes.io/projected/9b95d847-028e-4d6e-a35e-edf9ed3b0944-kube-api-access-wxqx7\") pod \"9b95d847-028e-4d6e-a35e-edf9ed3b0944\" (UID: \"9b95d847-028e-4d6e-a35e-edf9ed3b0944\") "
	Aug 13 20:17:38 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:38.636960    1567 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b95d847-028e-4d6e-a35e-edf9ed3b0944-webhook-cert\") pod \"9b95d847-028e-4d6e-a35e-edf9ed3b0944\" (UID: \"9b95d847-028e-4d6e-a35e-edf9ed3b0944\") "
	Aug 13 20:17:38 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:38.657894    1567 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b95d847-028e-4d6e-a35e-edf9ed3b0944-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9b95d847-028e-4d6e-a35e-edf9ed3b0944" (UID: "9b95d847-028e-4d6e-a35e-edf9ed3b0944"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 13 20:17:38 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:38.661870    1567 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b95d847-028e-4d6e-a35e-edf9ed3b0944-kube-api-access-wxqx7" (OuterVolumeSpecName: "kube-api-access-wxqx7") pod "9b95d847-028e-4d6e-a35e-edf9ed3b0944" (UID: "9b95d847-028e-4d6e-a35e-edf9ed3b0944"). InnerVolumeSpecName "kube-api-access-wxqx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:17:38 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:38.738152    1567 reconciler.go:319] "Volume detached for volume \"kube-api-access-wxqx7\" (UniqueName: \"kubernetes.io/projected/9b95d847-028e-4d6e-a35e-edf9ed3b0944-kube-api-access-wxqx7\") on node \"addons-20210813200856-13784\" DevicePath \"\""
	Aug 13 20:17:38 addons-20210813200856-13784 kubelet[1567]: I0813 20:17:38.738271    1567 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b95d847-028e-4d6e-a35e-edf9ed3b0944-webhook-cert\") on node \"addons-20210813200856-13784\" DevicePath \"\""
	Aug 13 20:17:43 addons-20210813200856-13784 kubelet[1567]: W0813 20:17:43.592912    1567 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:17:43 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:43.606346    1567 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:17:43 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:43.630483    1567 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/ingress-nginx_ingress-nginx-admission-create-6zkjg_e56baacd-a202-4c5f-96eb-7b26dba4345c: no such file or directory" pod="ingress-nginx/ingress-nginx-admission-create-6zkjg"
	Aug 13 20:17:43 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:43.631766    1567 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/e56baacd-a202-4c5f-96eb-7b26dba4345c/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-create-6zkjg"
	Aug 13 20:17:43 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:43.671133    1567 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/9b95d847-028e-4d6e-a35e-edf9ed3b0944/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-c57mh"
	Aug 13 20:17:43 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:43.671246    1567 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-fqll4_bc4f7775-bd56-4f19-b691-0458200f9023: no such file or directory" pod="ingress-nginx/ingress-nginx-admission-patch-fqll4"
	Aug 13 20:17:43 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:43.672489    1567 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/bc4f7775-bd56-4f19-b691-0458200f9023/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-patch-fqll4"
	Aug 13 20:17:53 addons-20210813200856-13784 kubelet[1567]: W0813 20:17:53.689289    1567 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:17:53 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:53.720930    1567 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d/docker/d6667d21286d6d098e43e0158178f269a7e7b2e29946972b5df03e219a19476d\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:17:53 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:53.721162    1567 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/ingress-nginx_ingress-nginx-admission-create-6zkjg_e56baacd-a202-4c5f-96eb-7b26dba4345c: no such file or directory" pod="ingress-nginx/ingress-nginx-admission-create-6zkjg"
	Aug 13 20:17:53 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:53.722950    1567 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/e56baacd-a202-4c5f-96eb-7b26dba4345c/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-create-6zkjg"
	Aug 13 20:17:53 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:53.742597    1567 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-fqll4_bc4f7775-bd56-4f19-b691-0458200f9023: no such file or directory" pod="ingress-nginx/ingress-nginx-admission-patch-fqll4"
	Aug 13 20:17:53 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:53.743868    1567 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/bc4f7775-bd56-4f19-b691-0458200f9023/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-patch-fqll4"
	Aug 13 20:17:53 addons-20210813200856-13784 kubelet[1567]: E0813 20:17:53.772042    1567 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/9b95d847-028e-4d6e-a35e-edf9ed3b0944/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-c57mh"
	
	* 
	* ==> storage-provisioner [7994700d56829b2b6eea2b05122e2eb67f0b19e269eb9395dc48970eb0ee0861] <==
	* I0813 20:09:51.478181       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:09:51.575570       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:09:51.575729       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:09:51.773849       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:09:51.774031       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210813200856-13784_24790c6c-f473-49fb-bb2c-f4335e2867c0!
	I0813 20:09:51.777222       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa9b981f-a3ed-4b2b-9461-efd887b754d8", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210813200856-13784_24790c6c-f473-49fb-bb2c-f4335e2867c0 became leader
	I0813 20:09:51.876153       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210813200856-13784_24790c6c-f473-49fb-bb2c-f4335e2867c0!
	

                                                
                                                
-- /stdout --
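The repeated "Unable to fetch pod etc hosts stats" errors in the kubelet log above come from the cadvisor stats provider shelling out to du for each pod's etc-hosts file; once the completed admission pods are cleaned up the path is gone and du exits non-zero, which is the "exit status 1" wrapped into each error. A minimal sketch of that invocation (the command string is copied from the log line itself; the pod-UID path is illustrative and will not exist on an arbitrary machine):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the kubelet log shows failing: nice -n 19 du -x -s -B 1 <path>.
	// The path is the etc-hosts file of a pod that has already been removed.
	path := "/var/lib/kubelet/pods/e56baacd-a202-4c5f-96eb-7b26dba4345c/etc-hosts"
	out, err := exec.Command("nice", "-n", "19", "du", "-x", "-s", "-B", "1", path).CombinedOutput()
	fmt.Printf("du output: %s", out)
	if err != nil {
		// du exits 1 when the path no longer exists, producing the
		// "exit status 1" reported by cadvisor_stats_provider.go:151.
		fmt.Println("du failed:", err)
	}
}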
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210813200856-13784 -n addons-20210813200856-13784
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210813200856-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210813200856-13784 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210813200856-13784 describe pod : exit status 1 (49.028692ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210813200856-13784 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Ingress (315.85s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (7.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-7gjcw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-7gjcw -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-7gjcw -- sh -c "ping -c 1 192.168.49.1": exit status 1 (191.871368ms)

                                                
                                                
-- stdout --
	PING 192.168.49.1 (192.168.49.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:538: Failed to ping host (192.168.49.1) from pod (busybox-84b6686758-7gjcw): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-nhdx8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-nhdx8 -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-nhdx8 -- sh -c "ping -c 1 192.168.49.1": exit status 1 (1.43888463s)

                                                
                                                
-- stdout --
	PING 192.168.49.1 (192.168.49.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:538: Failed to ping host (192.168.49.1) from pod (busybox-84b6686758-nhdx8): exit status 1
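Both pods fail the same way: busybox ping prints "permission denied (are you root?)" when socket(AF_INET, SOCK_RAW, IPPROTO_ICMP) returns EPERM, i.e. when the container lacks CAP_NET_RAW. Docker grants NET_RAW to containers by default, but CRI-O's default capability set omits it, which is the likely reason this bites the crio runtime job. A minimal sketch (assuming it is run inside the affected pod) that distinguishes the privileged raw ICMP socket from the unprivileged ICMP datagram socket gated by the net.ipv4.ping_group_range sysctl:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Raw ICMP socket: what busybox ping opens; requires CAP_NET_RAW.
	if fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_RAW, syscall.IPPROTO_ICMP); err != nil {
		fmt.Println("raw ICMP socket:", err) // EPERM without CAP_NET_RAW
	} else {
		syscall.Close(fd)
		fmt.Println("raw ICMP socket: ok")
	}
	// Unprivileged ICMP datagram socket: allowed only when the caller's GID
	// falls inside the kernel's net.ipv4.ping_group_range.
	if fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_DGRAM, syscall.IPPROTO_ICMP); err != nil {
		fmt.Println("dgram ICMP socket:", err)
	} else {
		syscall.Close(fd)
		fmt.Println("dgram ICMP socket: ok")
	}
}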
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect multinode-20210813202501-13784
helpers_test.go:236: (dbg) docker inspect multinode-20210813202501-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3",
	        "Created": "2021-08-13T20:25:03.23418075Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 78859,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:25:03.682192414Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/hosts",
	        "LogPath": "/var/lib/docker/containers/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3-json.log",
	        "Name": "/multinode-20210813202501-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20210813202501-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20210813202501-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e575344e5393bde0baf90375cc185b53a3b24884af190533f022c670a49a7483-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e575344e5393bde0baf90375cc185b53a3b24884af190533f022c670a49a7483/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e575344e5393bde0baf90375cc185b53a3b24884af190533f022c670a49a7483/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e575344e5393bde0baf90375cc185b53a3b24884af190533f022c670a49a7483/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20210813202501-13784",
	                "Source": "/var/lib/docker/volumes/multinode-20210813202501-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20210813202501-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20210813202501-13784",
	                "name.minikube.sigs.k8s.io": "multinode-20210813202501-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "88b37ad6b44e6964477cd845d1c8bd62edc90524fee9e45c65a119edb72bbe1f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32806"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32804"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/88b37ad6b44e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20210813202501-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7c315ec5ab8c"
	                    ],
	                    "NetworkID": "bd314c598891b01434c666c2478e085185e1b5ab9063926d9047ef35b83deae1",
	                    "EndpointID": "88e7a45e73f19e7ef89da44ff0b3844d963cf6a7c44359d8cc8b2f6222a81e2f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
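The inspect dump above shows a healthy container and confirms that 192.168.49.1 is the gateway of the docker network the pods were trying to ping. Since docker inspect format strings are Go templates, the individual fields the harness reads from this JSON can be pulled directly; a small sketch (assuming docker is on PATH and the container from this run still exists) that extracts a published host port:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks docker for the host port mapped to a container port, e.g.
// hostPort("multinode-20210813202501-13784", "22/tcp") -> "32807" per the
// NetworkSettings.Ports section above.
func hostPort(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("multinode-20210813202501-13784", "22/tcp")
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + p)
}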
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210813202501-13784 -n multinode-20210813202501-13784
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202501-13784 logs -n 25: (4.454709635s)
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                Profile                 |   User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | json-output-20210813202213-13784       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:22:13 UTC | Fri, 13 Aug 2021 20:23:20 UTC |
	|         | json-output-20210813202213-13784                  |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	|         | --memory=2200 --wait=true                         |                                        |          |         |                               |                               |
	|         | --driver=docker                                   |                                        |          |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |          |         |                               |                               |
	| unpause | -p                                                | json-output-20210813202213-13784       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:23:22 UTC | Fri, 13 Aug 2021 20:23:23 UTC |
	|         | json-output-20210813202213-13784                  |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	| stop    | -p                                                | json-output-20210813202213-13784       | testUser | v1.22.0 | Fri, 13 Aug 2021 20:23:23 UTC | Fri, 13 Aug 2021 20:23:34 UTC |
	|         | json-output-20210813202213-13784                  |                                        |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	| delete  | -p                                                | json-output-20210813202213-13784       | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:23:34 UTC | Fri, 13 Aug 2021 20:23:40 UTC |
	|         | json-output-20210813202213-13784                  |                                        |          |         |                               |                               |
	| delete  | -p                                                | json-output-error-20210813202340-13784 | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:23:40 UTC | Fri, 13 Aug 2021 20:23:40 UTC |
	|         | json-output-error-20210813202340-13784            |                                        |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210813202340-13784    | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:23:40 UTC | Fri, 13 Aug 2021 20:24:07 UTC |
	|         | docker-network-20210813202340-13784               |                                        |          |         |                               |                               |
	|         | --network=                                        |                                        |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210813202340-13784    | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:07 UTC | Fri, 13 Aug 2021 20:24:09 UTC |
	|         | docker-network-20210813202340-13784               |                                        |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210813202409-13784    | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:09 UTC | Fri, 13 Aug 2021 20:24:33 UTC |
	|         | docker-network-20210813202409-13784               |                                        |          |         |                               |                               |
	|         | --network=bridge                                  |                                        |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210813202409-13784    | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:33 UTC | Fri, 13 Aug 2021 20:24:35 UTC |
	|         | docker-network-20210813202409-13784               |                                        |          |         |                               |                               |
	| start   | -p                                                | existing-network-20210813202435-13784  | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:35 UTC | Fri, 13 Aug 2021 20:24:59 UTC |
	|         | existing-network-20210813202435-13784             |                                        |          |         |                               |                               |
	|         | --network=existing-network                        |                                        |          |         |                               |                               |
	| delete  | -p                                                | existing-network-20210813202435-13784  | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:24:59 UTC | Fri, 13 Aug 2021 20:25:01 UTC |
	|         | existing-network-20210813202435-13784             |                                        |          |         |                               |                               |
	| start   | -p                                                | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:25:01 UTC | Fri, 13 Aug 2021 20:26:37 UTC |
	|         | multinode-20210813202501-13784                    |                                        |          |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                        |          |         |                               |                               |
	|         | --nodes=2 -v=8                                    |                                        |          |         |                               |                               |
	|         | --alsologtostderr                                 |                                        |          |         |                               |                               |
	|         | --driver=docker                                   |                                        |          |         |                               |                               |
	|         | --container-runtime=crio                          |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202501-13784 -- apply -f     | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:38 UTC | Fri, 13 Aug 2021 20:26:38 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                        |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:38 UTC | Fri, 13 Aug 2021 20:26:45 UTC |
	|         | multinode-20210813202501-13784                    |                                        |          |         |                               |                               |
	|         | -- rollout status                                 |                                        |          |         |                               |                               |
	|         | deployment/busybox                                |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202501-13784                 | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:45 UTC | Fri, 13 Aug 2021 20:26:45 UTC |
	|         | -- get pods -o                                    |                                        |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202501-13784                 | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:45 UTC | Fri, 13 Aug 2021 20:26:45 UTC |
	|         | -- get pods -o                                    |                                        |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                        |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:45 UTC | Fri, 13 Aug 2021 20:26:46 UTC |
	|         | multinode-20210813202501-13784                    |                                        |          |         |                               |                               |
	|         | -- exec                                           |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-7gjcw --                       |                                        |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                        |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:46 UTC | Fri, 13 Aug 2021 20:26:46 UTC |
	|         | multinode-20210813202501-13784                    |                                        |          |         |                               |                               |
	|         | -- exec                                           |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-nhdx8 --                       |                                        |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                        |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:46 UTC | Fri, 13 Aug 2021 20:26:46 UTC |
	|         | multinode-20210813202501-13784                    |                                        |          |         |                               |                               |
	|         | -- exec                                           |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-7gjcw --                       |                                        |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                        |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:46 UTC | Fri, 13 Aug 2021 20:26:46 UTC |
	|         | multinode-20210813202501-13784                    |                                        |          |         |                               |                               |
	|         | -- exec                                           |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-nhdx8 --                       |                                        |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202501-13784                 | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:46 UTC | Fri, 13 Aug 2021 20:26:47 UTC |
	|         | -- exec busybox-84b6686758-7gjcw                  |                                        |          |         |                               |                               |
	|         | -- nslookup                                       |                                        |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202501-13784                 | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:47 UTC | Fri, 13 Aug 2021 20:26:47 UTC |
	|         | -- exec busybox-84b6686758-nhdx8                  |                                        |          |         |                               |                               |
	|         | -- nslookup                                       |                                        |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                        |          |         |                               |                               |
	| kubectl | -p multinode-20210813202501-13784                 | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:47 UTC | Fri, 13 Aug 2021 20:26:47 UTC |
	|         | -- get pods -o                                    |                                        |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                        |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:47 UTC | Fri, 13 Aug 2021 20:26:47 UTC |
	|         | multinode-20210813202501-13784                    |                                        |          |         |                               |                               |
	|         | -- exec                                           |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-7gjcw                          |                                        |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                        |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                        |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                        |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813202501-13784         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 20:26:47 UTC | Fri, 13 Aug 2021 20:26:47 UTC |
	|         | multinode-20210813202501-13784                    |                                        |          |         |                               |                               |
	|         | -- exec                                           |                                        |          |         |                               |                               |
	|         | busybox-84b6686758-nhdx8                          |                                        |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                        |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                        |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                        |          |         |                               |                               |
	|---------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
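The two sh -c invocations at the bottom of the audit table are how PingHostFrom2Pods discovers the host address: in busybox's nslookup output, line 5 is the "Address 1: <ip> <name>" line for the queried name, so awk 'NR==5' | cut -d' ' -f3 yields the bare IP (192.168.49.1 here). A sketch of the same extraction, against an illustrative nslookup transcript (assumed, not captured from this run):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative busybox nslookup output for host.minikube.internal.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal"
	lines := strings.Split(sample, "\n")
	// awk 'NR==5' selects the fifth line (awk counts from 1)...
	fields := strings.Split(lines[4], " ")
	// ...and cut -d' ' -f3 takes the third space-separated field.
	fmt.Println(fields[2]) // 192.168.49.1
}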
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:25:01
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:25:01.680512   78216 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:25:01.680600   78216 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:25:01.680629   78216 out.go:311] Setting ErrFile to fd 2...
	I0813 20:25:01.680632   78216 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:25:01.680733   78216 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:25:01.680975   78216 out.go:305] Setting JSON to false
	I0813 20:25:01.715776   78216 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":4064,"bootTime":1628882237,"procs":166,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:25:01.715900   78216 start.go:121] virtualization: kvm guest
	I0813 20:25:01.718270   78216 out.go:177] * [multinode-20210813202501-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:25:01.719912   78216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:01.718403   78216 notify.go:169] Checking for updates...
	I0813 20:25:01.721426   78216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:25:01.723025   78216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:25:01.724442   78216 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:25:01.724619   78216 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:25:01.769250   78216 docker.go:132] docker version: linux-19.03.15
	I0813 20:25:01.769333   78216 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:25:01.851647   78216 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:25:01.809832043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:25:01.851760   78216 docker.go:244] overlay module found
	I0813 20:25:01.853900   78216 out.go:177] * Using the docker driver based on user configuration
	I0813 20:25:01.853927   78216 start.go:278] selected driver: docker
	I0813 20:25:01.853934   78216 start.go:751] validating driver "docker" against <nil>
	I0813 20:25:01.853954   78216 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:25:01.854007   78216 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:25:01.854027   78216 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:25:01.855512   78216 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:25:01.856340   78216 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:25:01.931094   78216 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:25:01.889937937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:25:01.931214   78216 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:25:01.931354   78216 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:25:01.931379   78216 cni.go:93] Creating CNI manager for ""
	I0813 20:25:01.931390   78216 cni.go:154] 0 nodes found, recommending kindnet
	I0813 20:25:01.931402   78216 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:25:01.931413   78216 start_flags.go:277] config:
	{Name:multinode-20210813202501-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202501-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:25:01.933527   78216 out.go:177] * Starting control plane node multinode-20210813202501-13784 in cluster multinode-20210813202501-13784
	I0813 20:25:01.933581   78216 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:25:01.934904   78216 out.go:177] * Pulling base image ...
	I0813 20:25:01.934934   78216 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:25:01.934975   78216 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:25:01.934991   78216 cache.go:56] Caching tarball of preloaded images
	I0813 20:25:01.935016   78216 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:25:01.935141   78216 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:25:01.935156   78216 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:25:01.935430   78216 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/config.json ...
	I0813 20:25:01.935465   78216 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/config.json: {Name:mk29324a62452d1f43d5e96aecef6d98cc766444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:02.018339   78216 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:25:02.018372   78216 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:25:02.018390   78216 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:25:02.018434   78216 start.go:313] acquiring machines lock for multinode-20210813202501-13784: {Name:mk0eb5e6be986085f22ad337b7131223a2768410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:25:02.018570   78216 start.go:317] acquired machines lock for "multinode-20210813202501-13784" in 115.36µs
	I0813 20:25:02.018604   78216 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813202501-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202501-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:25:02.018720   78216 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:25:02.021016   78216 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:25:02.021246   78216 start.go:160] libmachine.API.Create for "multinode-20210813202501-13784" (driver="docker")
	I0813 20:25:02.021270   78216 client.go:168] LocalClient.Create starting
	I0813 20:25:02.021321   78216 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:25:02.021350   78216 main.go:130] libmachine: Decoding PEM data...
	I0813 20:25:02.021367   78216 main.go:130] libmachine: Parsing certificate...
	I0813 20:25:02.021469   78216 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:25:02.021506   78216 main.go:130] libmachine: Decoding PEM data...
	I0813 20:25:02.021524   78216 main.go:130] libmachine: Parsing certificate...
	I0813 20:25:02.021845   78216 cli_runner.go:115] Run: docker network inspect multinode-20210813202501-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:25:02.057221   78216 cli_runner.go:162] docker network inspect multinode-20210813202501-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:25:02.057292   78216 network_create.go:255] running [docker network inspect multinode-20210813202501-13784] to gather additional debugging logs...
	I0813 20:25:02.057321   78216 cli_runner.go:115] Run: docker network inspect multinode-20210813202501-13784
	W0813 20:25:02.092181   78216 cli_runner.go:162] docker network inspect multinode-20210813202501-13784 returned with exit code 1
	I0813 20:25:02.092212   78216 network_create.go:258] error running [docker network inspect multinode-20210813202501-13784]: docker network inspect multinode-20210813202501-13784: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20210813202501-13784
	I0813 20:25:02.092240   78216 network_create.go:260] output of [docker network inspect multinode-20210813202501-13784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20210813202501-13784
	
	** /stderr **
	I0813 20:25:02.092283   78216 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:25:02.127857   78216 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006943c8] misses:0}
	I0813 20:25:02.127921   78216 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:25:02.127938   78216 network_create.go:106] attempt to create docker network multinode-20210813202501-13784 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 20:25:02.127983   78216 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20210813202501-13784
	I0813 20:25:02.195310   78216 network_create.go:90] docker network multinode-20210813202501-13784 192.168.49.0/24 created
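
The inspect-then-create sequence above reduces to two docker CLI calls: inspect fails with exit code 1 when the network is absent, and create is invoked with the exact flags from the log line. A minimal Go sketch, assuming only those logged flags (ensureNetwork is an illustrative name, not minikube's API):

// Sketch of the inspect-then-create flow, reduced to plain exec calls.
package main

import (
	"fmt"
	"os/exec"
)

func ensureNetwork(name, subnet, gateway string) error {
	// "docker network inspect" exits non-zero when the network is absent.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil // already exists
	}
	// Flags copied verbatim from the "docker network create" line in the log.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := ensureNetwork("multinode-20210813202501-13784", "192.168.49.0/24", "192.168.49.1")
	fmt.Println(err)
}
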
	I0813 20:25:02.195340   78216 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20210813202501-13784" container
	I0813 20:25:02.195395   78216 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:25:02.230746   78216 cli_runner.go:115] Run: docker volume create multinode-20210813202501-13784 --label name.minikube.sigs.k8s.io=multinode-20210813202501-13784 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:25:02.267360   78216 oci.go:102] Successfully created a docker volume multinode-20210813202501-13784
	I0813 20:25:02.267427   78216 cli_runner.go:115] Run: docker run --rm --name multinode-20210813202501-13784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210813202501-13784 --entrypoint /usr/bin/test -v multinode-20210813202501-13784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:25:03.112559   78216 oci.go:106] Successfully prepared a docker volume multinode-20210813202501-13784
	W0813 20:25:03.112612   78216 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:25:03.112619   78216 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:25:03.112666   78216 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:25:03.112692   78216 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:25:03.112725   78216 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:25:03.112803   78216 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210813202501-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:25:03.191064   78216 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210813202501-13784 --name multinode-20210813202501-13784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210813202501-13784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210813202501-13784 --network multinode-20210813202501-13784 --ip 192.168.49.2 --volume multinode-20210813202501-13784:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:25:03.690242   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Running}}
	I0813 20:25:03.734665   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Status}}
	I0813 20:25:03.776337   78216 cli_runner.go:115] Run: docker exec multinode-20210813202501-13784 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:25:03.908697   78216 oci.go:278] the created container "multinode-20210813202501-13784" has a running status.
	I0813 20:25:03.908733   78216 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa...
	I0813 20:25:04.163471   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0813 20:25:04.163519   78216 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:25:04.547698   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Status}}
	I0813 20:25:04.587367   78216 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:25:04.587392   78216 kic_runner.go:115] Args: [docker exec --privileged multinode-20210813202501-13784 chown docker:docker /home/docker/.ssh/authorized_keys]
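
The "Creating ssh key for kic" step above produces an ordinary RSA pair; the .pub half becomes the container's authorized_keys and is then chown'd to the docker user. A sketch of that key generation with the standard library plus golang.org/x/crypto/ssh; the output file name here is illustrative, the real path is the .minikube/machines one in the log:

// Sketch: generate the id_rsa pair and print the authorized_keys line.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PEM-encode the private half (what lands in machines/<name>/id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	// OpenSSH-format public half (the id_rsa.pub -> authorized_keys content).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
}
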
	I0813 20:25:06.549594   78216 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210813202501-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.436692796s)
	I0813 20:25:06.549626   78216 kic.go:188] duration metric: took 3.436899 seconds to extract preloaded images to volume
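
The preload extraction timed above (3.4s) is done by a throwaway sidecar container: the lz4 tarball is mounted read-only, the machine's named volume is mounted at /extractDir, and tar runs as the entrypoint. A sketch of the same invocation from Go; the tarball path is shortened here and the image ref is the kicbase one from the log:

// Sketch: untar the preloaded-images tarball into the machine volume.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		// Host path must be absolute for a bind mount; shortened for the sketch.
		"-v", "/path/to/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro",
		"-v", "multinode-20210813202501-13784:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
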
	I0813 20:25:06.549724   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Status}}
	I0813 20:25:06.587037   78216 machine.go:88] provisioning docker machine ...
	I0813 20:25:06.587081   78216 ubuntu.go:169] provisioning hostname "multinode-20210813202501-13784"
	I0813 20:25:06.587141   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
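
Because the container publishes sshd on the loopback with a random host port (--publish=127.0.0.1::22 in the docker run above), every SSH step first recovers the mapped port with the inspect template just shown, here resolving to 32807. A sketch of that lookup:

// Sketch: recover the randomly published host port for a container port.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("multinode-20210813202501-13784", "22/tcp")
	fmt.Println(p, err) // e.g. "32807" in the run above
}
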
	I0813 20:25:06.623181   78216 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:06.623448   78216 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0813 20:25:06.623468   78216 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813202501-13784 && echo "multinode-20210813202501-13784" | sudo tee /etc/hostname
	I0813 20:25:06.753611   78216 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813202501-13784
	
	I0813 20:25:06.753723   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:06.790465   78216 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:06.790622   78216 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0813 20:25:06.790642   78216 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813202501-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813202501-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813202501-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:25:06.912900   78216 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:25:06.912933   78216 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:25:06.912957   78216 ubuntu.go:177] setting up certificates
	I0813 20:25:06.912977   78216 provision.go:83] configureAuth start
	I0813 20:25:06.913036   78216 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202501-13784
	I0813 20:25:06.949807   78216 provision.go:138] copyHostCerts
	I0813 20:25:06.949850   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:25:06.949879   78216 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:25:06.949890   78216 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:25:06.949955   78216 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:25:06.950034   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:25:06.950057   78216 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:25:06.950067   78216 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:25:06.950093   78216 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:25:06.950138   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:25:06.950158   78216 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:25:06.950166   78216 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:25:06.950184   78216 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:25:06.950228   78216 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813202501-13784 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210813202501-13784]
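
The server cert generated above is signed by the machine CA and carries the logged SAN list (the container IP, loopback, and the host names). A self-contained crypto/x509 sketch; the CA is minted inline so the example runs, whereas the log reuses ca.pem/ca-key.pem from .minikube/certs:

// Sketch: sign a docker-machine server certificate with the logged SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // errors elided in this sketch
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20210813202501-13784"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go:112 line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-20210813202501-13784"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
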
	I0813 20:25:07.004632   78216 provision.go:172] copyRemoteCerts
	I0813 20:25:07.004694   78216 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:25:07.004743   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:07.042841   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:25:07.132404   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 20:25:07.132456   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:25:07.148275   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 20:25:07.148322   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 20:25:07.163272   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 20:25:07.163329   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:25:07.178441   78216 provision.go:86] duration metric: configureAuth took 265.451792ms
	I0813 20:25:07.178463   78216 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:25:07.178602   78216 config.go:177] Loaded profile config "multinode-20210813202501-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:25:07.178756   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:07.215567   78216 main.go:130] libmachine: Using SSH client type: native
	I0813 20:25:07.215750   78216 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0813 20:25:07.215777   78216 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:25:07.572895   78216 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:25:07.572928   78216 machine.go:91] provisioned docker machine in 985.862558ms
	I0813 20:25:07.572939   78216 client.go:171] LocalClient.Create took 5.551663928s
	I0813 20:25:07.572950   78216 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813202501-13784" took 5.551704983s
	I0813 20:25:07.572959   78216 start.go:267] post-start starting for "multinode-20210813202501-13784" (driver="docker")
	I0813 20:25:07.572966   78216 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:25:07.573042   78216 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:25:07.573093   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:07.609802   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:25:07.696565   78216 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:25:07.699025   78216 command_runner.go:124] > NAME="Ubuntu"
	I0813 20:25:07.699045   78216 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0813 20:25:07.699049   78216 command_runner.go:124] > ID=ubuntu
	I0813 20:25:07.699057   78216 command_runner.go:124] > ID_LIKE=debian
	I0813 20:25:07.699062   78216 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0813 20:25:07.699066   78216 command_runner.go:124] > VERSION_ID="20.04"
	I0813 20:25:07.699072   78216 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0813 20:25:07.699077   78216 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0813 20:25:07.699082   78216 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0813 20:25:07.699093   78216 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0813 20:25:07.699099   78216 command_runner.go:124] > VERSION_CODENAME=focal
	I0813 20:25:07.699104   78216 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0813 20:25:07.699171   78216 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:25:07.699187   78216 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:25:07.699194   78216 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:25:07.699203   78216 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:25:07.699212   78216 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:25:07.699257   78216 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:25:07.699346   78216 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:25:07.699356   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> /etc/ssl/certs/137842.pem
	I0813 20:25:07.699502   78216 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:25:07.705761   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:25:07.721456   78216 start.go:270] post-start completed in 148.483418ms
	I0813 20:25:07.721813   78216 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202501-13784
	I0813 20:25:07.758827   78216 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/config.json ...
	I0813 20:25:07.759034   78216 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:25:07.759076   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:07.796708   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:25:07.881649   78216 command_runner.go:124] > 34%
	I0813 20:25:07.881686   78216 start.go:129] duration metric: createHost completed in 5.862957266s
	I0813 20:25:07.881697   78216 start.go:80] releasing machines lock for "multinode-20210813202501-13784", held for 5.863115958s
	I0813 20:25:07.881827   78216 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202501-13784
	I0813 20:25:07.918901   78216 ssh_runner.go:149] Run: systemctl --version
	I0813 20:25:07.918913   78216 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:25:07.918961   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:07.918976   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:07.956288   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:25:07.957353   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:25:08.041205   78216 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.11)
	I0813 20:25:08.041250   78216 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0813 20:25:08.041397   78216 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:25:08.192136   78216 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 20:25:08.192167   78216 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 20:25:08.192173   78216 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 20:25:08.192177   78216 command_runner.go:124] > The document has moved
	I0813 20:25:08.192184   78216 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 20:25:08.192190   78216 command_runner.go:124] > </BODY></HTML>
	I0813 20:25:08.192281   78216 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:25:08.201527   78216 docker.go:153] disabling docker service ...
	I0813 20:25:08.201572   78216 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:25:08.210954   78216 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:25:08.218928   78216 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:25:08.227275   78216 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 20:25:08.280261   78216 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:25:08.343943   78216 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 20:25:08.344033   78216 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:25:08.352611   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:25:08.363307   78216 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:25:08.363330   78216 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:25:08.363907   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:25:08.371199   78216 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:25:08.371227   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
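
The two sed edits above point CRI-O at the pinned pause image and at the "kindnet" CNI network, after crictl.yaml has been pointed at the CRI-O socket. A sketch of the same edits done with Go's regexp instead of sed, using the paths from the log (needs root, i.e. run inside the node container):

// Sketch: write /etc/crictl.yaml and patch /etc/crio/crio.conf in place.
package main

import (
	"os"
	"regexp"
)

func main() {
	crictl := "runtime-endpoint: unix:///var/run/crio/crio.sock\n" +
		"image-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictl), 0644); err != nil {
		panic(err)
	}
	conf, err := os.ReadFile("/etc/crio/crio.conf")
	if err != nil {
		panic(err)
	}
	conf = regexp.MustCompile(`(?m)^pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "k8s.gcr.io/pause:3.4.1"`))
	conf = regexp.MustCompile(`(?m)^.*cni_default_network = .*$`).
		ReplaceAll(conf, []byte(`cni_default_network = "kindnet"`))
	if err := os.WriteFile("/etc/crio/crio.conf", conf, 0644); err != nil {
		panic(err)
	}
}
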
	I0813 20:25:08.378729   78216 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:25:08.384350   78216 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:25:08.384384   78216 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:25:08.384419   78216 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:25:08.390719   78216 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:25:08.396345   78216 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:25:08.450415   78216 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:25:08.458872   78216 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:25:08.458934   78216 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:25:08.461791   78216 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 20:25:08.461813   78216 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 20:25:08.461823   78216 command_runner.go:124] > Device: 35h/53d	Inode: 689044      Links: 1
	I0813 20:25:08.461834   78216 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:25:08.461842   78216 command_runner.go:124] > Access: 2021-08-13 20:25:07.559426277 +0000
	I0813 20:25:08.461853   78216 command_runner.go:124] > Modify: 2021-08-13 20:25:07.559426277 +0000
	I0813 20:25:08.461867   78216 command_runner.go:124] > Change: 2021-08-13 20:25:07.559426277 +0000
	I0813 20:25:08.461874   78216 command_runner.go:124] >  Birth: -
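
The "Will wait 60s for socket path" step above is a stat-until-present poll with a deadline; once the socket stat succeeds (as it does on the first try here), startup proceeds to the crictl version check. A minimal sketch of that wait loop, with an assumed 500ms poll interval:

// Sketch: poll for the CRI-O socket until it exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
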
	I0813 20:25:08.461904   78216 start.go:413] Will wait 60s for crictl version
	I0813 20:25:08.461945   78216 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:25:08.486545   78216 command_runner.go:124] > Version:  0.1.0
	I0813 20:25:08.486567   78216 command_runner.go:124] > RuntimeName:  cri-o
	I0813 20:25:08.486574   78216 command_runner.go:124] > RuntimeVersion:  1.20.3
	I0813 20:25:08.486583   78216 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 20:25:08.487974   78216 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:25:08.488042   78216 ssh_runner.go:149] Run: crio --version
	I0813 20:25:08.546256   78216 command_runner.go:124] ! time="2021-08-13T20:25:08Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 20:25:08.547700   78216 command_runner.go:124] > crio version 1.20.3
	I0813 20:25:08.547715   78216 command_runner.go:124] > Version:       1.20.3
	I0813 20:25:08.547722   78216 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0813 20:25:08.547728   78216 command_runner.go:124] > GitTreeState:  clean
	I0813 20:25:08.547741   78216 command_runner.go:124] > BuildDate:     2021-07-14T23:38:00Z
	I0813 20:25:08.547754   78216 command_runner.go:124] > GoVersion:     go1.15.2
	I0813 20:25:08.547760   78216 command_runner.go:124] > Compiler:      gc
	I0813 20:25:08.547771   78216 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:25:08.547775   78216 command_runner.go:124] > Linkmode:      dynamic
	I0813 20:25:08.547858   78216 ssh_runner.go:149] Run: crio --version
	I0813 20:25:08.603237   78216 command_runner.go:124] > crio version 1.20.3
	I0813 20:25:08.603259   78216 command_runner.go:124] > Version:       1.20.3
	I0813 20:25:08.603266   78216 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0813 20:25:08.603271   78216 command_runner.go:124] > GitTreeState:  clean
	I0813 20:25:08.603276   78216 command_runner.go:124] > BuildDate:     2021-07-14T23:38:00Z
	I0813 20:25:08.603281   78216 command_runner.go:124] > GoVersion:     go1.15.2
	I0813 20:25:08.603285   78216 command_runner.go:124] > Compiler:      gc
	I0813 20:25:08.603290   78216 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:25:08.603295   78216 command_runner.go:124] > Linkmode:      dynamic
	I0813 20:25:08.604404   78216 command_runner.go:124] ! time="2021-08-13T20:25:08Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 20:25:08.607254   78216 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:25:08.607321   78216 cli_runner.go:115] Run: docker network inspect multinode-20210813202501-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:25:08.643136   78216 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:25:08.646377   78216 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
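
The bash pipeline above rewrites /etc/hosts idempotently: any stale host.minikube.internal line is dropped, then the gateway IP is appended. The same rewrite as a Go sketch (run as root on the node):

// Sketch: replace the host.minikube.internal entry in /etc/hosts.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
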
	I0813 20:25:08.655045   78216 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:25:08.655096   78216 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:25:08.696712   78216 command_runner.go:124] > {
	I0813 20:25:08.696735   78216 command_runner.go:124] >   "images": [
	I0813 20:25:08.696740   78216 command_runner.go:124] >     {
	I0813 20:25:08.696750   78216 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 20:25:08.696755   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.696762   78216 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 20:25:08.696765   78216 command_runner.go:124] >       ],
	I0813 20:25:08.696770   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.696779   78216 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 20:25:08.696787   78216 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 20:25:08.696793   78216 command_runner.go:124] >       ],
	I0813 20:25:08.696797   78216 command_runner.go:124] >       "size": "119984626",
	I0813 20:25:08.696805   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.696810   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.696818   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.696822   78216 command_runner.go:124] >     },
	I0813 20:25:08.696826   78216 command_runner.go:124] >     {
	I0813 20:25:08.696832   78216 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 20:25:08.696839   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.696844   78216 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 20:25:08.696850   78216 command_runner.go:124] >       ],
	I0813 20:25:08.696854   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.696869   78216 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 20:25:08.696880   78216 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 20:25:08.696883   78216 command_runner.go:124] >       ],
	I0813 20:25:08.696888   78216 command_runner.go:124] >       "size": "228528983",
	I0813 20:25:08.696892   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.696896   78216 command_runner.go:124] >       "username": "nonroot",
	I0813 20:25:08.696904   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.696911   78216 command_runner.go:124] >     },
	I0813 20:25:08.696916   78216 command_runner.go:124] >     {
	I0813 20:25:08.696929   78216 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 20:25:08.696938   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.696946   78216 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 20:25:08.696955   78216 command_runner.go:124] >       ],
	I0813 20:25:08.696960   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.696976   78216 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 20:25:08.696991   78216 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 20:25:08.696995   78216 command_runner.go:124] >       ],
	I0813 20:25:08.696999   78216 command_runner.go:124] >       "size": "36950651",
	I0813 20:25:08.697003   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.697007   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697011   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697014   78216 command_runner.go:124] >     },
	I0813 20:25:08.697018   78216 command_runner.go:124] >     {
	I0813 20:25:08.697024   78216 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 20:25:08.697031   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.697036   78216 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 20:25:08.697043   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697048   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.697058   78216 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 20:25:08.697069   78216 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 20:25:08.697074   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697079   78216 command_runner.go:124] >       "size": "31470524",
	I0813 20:25:08.697088   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.697094   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697098   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697104   78216 command_runner.go:124] >     },
	I0813 20:25:08.697108   78216 command_runner.go:124] >     {
	I0813 20:25:08.697114   78216 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 20:25:08.697120   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.697128   78216 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 20:25:08.697134   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697138   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.697148   78216 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 20:25:08.697158   78216 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 20:25:08.697164   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697168   78216 command_runner.go:124] >       "size": "42585056",
	I0813 20:25:08.697174   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.697178   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697184   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697188   78216 command_runner.go:124] >     },
	I0813 20:25:08.697195   78216 command_runner.go:124] >     {
	I0813 20:25:08.697201   78216 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 20:25:08.697208   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.697213   78216 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 20:25:08.697220   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697228   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.697236   78216 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 20:25:08.697245   78216 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 20:25:08.697251   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697255   78216 command_runner.go:124] >       "size": "254662613",
	I0813 20:25:08.697261   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.697265   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697271   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697275   78216 command_runner.go:124] >     },
	I0813 20:25:08.697280   78216 command_runner.go:124] >     {
	I0813 20:25:08.697287   78216 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 20:25:08.697293   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.697298   78216 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 20:25:08.697303   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697307   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.697317   78216 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 20:25:08.697327   78216 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 20:25:08.697332   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697336   78216 command_runner.go:124] >       "size": "126878961",
	I0813 20:25:08.697342   78216 command_runner.go:124] >       "uid": {
	I0813 20:25:08.697346   78216 command_runner.go:124] >         "value": "0"
	I0813 20:25:08.697352   78216 command_runner.go:124] >       },
	I0813 20:25:08.697356   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697364   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697368   78216 command_runner.go:124] >     },
	I0813 20:25:08.697374   78216 command_runner.go:124] >     {
	I0813 20:25:08.697380   78216 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 20:25:08.697388   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.697397   78216 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 20:25:08.697402   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697407   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.697417   78216 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 20:25:08.697427   78216 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 20:25:08.697433   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697444   78216 command_runner.go:124] >       "size": "121087578",
	I0813 20:25:08.697450   78216 command_runner.go:124] >       "uid": {
	I0813 20:25:08.697454   78216 command_runner.go:124] >         "value": "0"
	I0813 20:25:08.697457   78216 command_runner.go:124] >       },
	I0813 20:25:08.697497   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697508   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697514   78216 command_runner.go:124] >     },
	I0813 20:25:08.697520   78216 command_runner.go:124] >     {
	I0813 20:25:08.697527   78216 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 20:25:08.697533   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.697538   78216 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 20:25:08.697544   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697548   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.697557   78216 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 20:25:08.697569   78216 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 20:25:08.697575   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697580   78216 command_runner.go:124] >       "size": "105129702",
	I0813 20:25:08.697586   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.697590   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697596   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697600   78216 command_runner.go:124] >     },
	I0813 20:25:08.697605   78216 command_runner.go:124] >     {
	I0813 20:25:08.697612   78216 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 20:25:08.697618   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.697624   78216 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 20:25:08.697629   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697633   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.697643   78216 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 20:25:08.697654   78216 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 20:25:08.697660   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697664   78216 command_runner.go:124] >       "size": "51893338",
	I0813 20:25:08.697670   78216 command_runner.go:124] >       "uid": {
	I0813 20:25:08.697674   78216 command_runner.go:124] >         "value": "0"
	I0813 20:25:08.697679   78216 command_runner.go:124] >       },
	I0813 20:25:08.697684   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697690   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697693   78216 command_runner.go:124] >     },
	I0813 20:25:08.697696   78216 command_runner.go:124] >     {
	I0813 20:25:08.697703   78216 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 20:25:08.697709   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.697713   78216 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 20:25:08.697721   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697725   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.697734   78216 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 20:25:08.697745   78216 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 20:25:08.697752   78216 command_runner.go:124] >       ],
	I0813 20:25:08.697756   78216 command_runner.go:124] >       "size": "689817",
	I0813 20:25:08.697762   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.697766   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.697770   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.697773   78216 command_runner.go:124] >     }
	I0813 20:25:08.697776   78216 command_runner.go:124] >   ]
	I0813 20:25:08.697779   78216 command_runner.go:124] > }
	I0813 20:25:08.697942   78216 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:25:08.697952   78216 crio.go:333] Images already preloaded, skipping extraction
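
The "all images are preloaded" verdict above comes from comparing the tags in the `sudo crictl images --output json` dump against the expected image list for v1.21.3. A sketch of that check; the struct mirrors the JSON shown above, and the expected-tag list here is abbreviated:

// Sketch: verify the preloaded images by parsing crictl's JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{
		"k8s.gcr.io/kube-apiserver:v1.21.3",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/pause:3.4.1",
	} {
		fmt.Println(want, have[want])
	}
}
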
	I0813 20:25:08.697986   78216 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:25:08.717874   78216 command_runner.go:124] > {
	I0813 20:25:08.717899   78216 command_runner.go:124] >   "images": [
	I0813 20:25:08.717905   78216 command_runner.go:124] >     {
	I0813 20:25:08.717917   78216 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 20:25:08.717925   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.717935   78216 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 20:25:08.717943   78216 command_runner.go:124] >       ],
	I0813 20:25:08.717947   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.717963   78216 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 20:25:08.717980   78216 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 20:25:08.717990   78216 command_runner.go:124] >       ],
	I0813 20:25:08.717997   78216 command_runner.go:124] >       "size": "119984626",
	I0813 20:25:08.718007   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.718016   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.718025   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718031   78216 command_runner.go:124] >     },
	I0813 20:25:08.718035   78216 command_runner.go:124] >     {
	I0813 20:25:08.718044   78216 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 20:25:08.718050   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718055   78216 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 20:25:08.718061   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718065   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718079   78216 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 20:25:08.718089   78216 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 20:25:08.718095   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718099   78216 command_runner.go:124] >       "size": "228528983",
	I0813 20:25:08.718105   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.718110   78216 command_runner.go:124] >       "username": "nonroot",
	I0813 20:25:08.718121   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718127   78216 command_runner.go:124] >     },
	I0813 20:25:08.718130   78216 command_runner.go:124] >     {
	I0813 20:25:08.718137   78216 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 20:25:08.718143   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718149   78216 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 20:25:08.718156   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718164   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718172   78216 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 20:25:08.718183   78216 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 20:25:08.718188   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718193   78216 command_runner.go:124] >       "size": "36950651",
	I0813 20:25:08.718199   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.718203   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.718209   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718212   78216 command_runner.go:124] >     },
	I0813 20:25:08.718217   78216 command_runner.go:124] >     {
	I0813 20:25:08.718224   78216 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 20:25:08.718230   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718236   78216 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 20:25:08.718241   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718245   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718255   78216 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 20:25:08.718263   78216 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 20:25:08.718269   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718274   78216 command_runner.go:124] >       "size": "31470524",
	I0813 20:25:08.718288   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.718294   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.718298   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718301   78216 command_runner.go:124] >     },
	I0813 20:25:08.718309   78216 command_runner.go:124] >     {
	I0813 20:25:08.718315   78216 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 20:25:08.718322   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718330   78216 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 20:25:08.718334   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718338   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718345   78216 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 20:25:08.718356   78216 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 20:25:08.718363   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718368   78216 command_runner.go:124] >       "size": "42585056",
	I0813 20:25:08.718375   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.718379   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.718386   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718389   78216 command_runner.go:124] >     },
	I0813 20:25:08.718395   78216 command_runner.go:124] >     {
	I0813 20:25:08.718402   78216 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 20:25:08.718408   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718412   78216 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 20:25:08.718416   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718420   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718430   78216 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 20:25:08.718439   78216 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 20:25:08.718447   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718457   78216 command_runner.go:124] >       "size": "254662613",
	I0813 20:25:08.718465   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.718471   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.718481   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718489   78216 command_runner.go:124] >     },
	I0813 20:25:08.718494   78216 command_runner.go:124] >     {
	I0813 20:25:08.718504   78216 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 20:25:08.718510   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718515   78216 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 20:25:08.718520   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718524   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718537   78216 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 20:25:08.718560   78216 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 20:25:08.718570   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718575   78216 command_runner.go:124] >       "size": "126878961",
	I0813 20:25:08.718579   78216 command_runner.go:124] >       "uid": {
	I0813 20:25:08.718583   78216 command_runner.go:124] >         "value": "0"
	I0813 20:25:08.718589   78216 command_runner.go:124] >       },
	I0813 20:25:08.718593   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.718602   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718607   78216 command_runner.go:124] >     },
	I0813 20:25:08.718611   78216 command_runner.go:124] >     {
	I0813 20:25:08.718620   78216 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 20:25:08.718629   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718641   78216 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 20:25:08.718646   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718656   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718671   78216 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 20:25:08.718682   78216 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 20:25:08.718687   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718718   78216 command_runner.go:124] >       "size": "121087578",
	I0813 20:25:08.718728   78216 command_runner.go:124] >       "uid": {
	I0813 20:25:08.718738   78216 command_runner.go:124] >         "value": "0"
	I0813 20:25:08.718743   78216 command_runner.go:124] >       },
	I0813 20:25:08.718759   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.718769   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718773   78216 command_runner.go:124] >     },
	I0813 20:25:08.718778   78216 command_runner.go:124] >     {
	I0813 20:25:08.718789   78216 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 20:25:08.718798   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718808   78216 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 20:25:08.718813   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718826   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718842   78216 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 20:25:08.718857   78216 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 20:25:08.718865   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718872   78216 command_runner.go:124] >       "size": "105129702",
	I0813 20:25:08.718880   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.718887   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.718896   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.718902   78216 command_runner.go:124] >     },
	I0813 20:25:08.718910   78216 command_runner.go:124] >     {
	I0813 20:25:08.718921   78216 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 20:25:08.718929   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.718937   78216 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 20:25:08.718944   78216 command_runner.go:124] >       ],
	I0813 20:25:08.718951   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.718966   78216 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 20:25:08.718987   78216 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 20:25:08.718995   78216 command_runner.go:124] >       ],
	I0813 20:25:08.719002   78216 command_runner.go:124] >       "size": "51893338",
	I0813 20:25:08.719011   78216 command_runner.go:124] >       "uid": {
	I0813 20:25:08.719018   78216 command_runner.go:124] >         "value": "0"
	I0813 20:25:08.719026   78216 command_runner.go:124] >       },
	I0813 20:25:08.719032   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.719041   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.719048   78216 command_runner.go:124] >     },
	I0813 20:25:08.719054   78216 command_runner.go:124] >     {
	I0813 20:25:08.719066   78216 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 20:25:08.719075   78216 command_runner.go:124] >       "repoTags": [
	I0813 20:25:08.719083   78216 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 20:25:08.719092   78216 command_runner.go:124] >       ],
	I0813 20:25:08.719103   78216 command_runner.go:124] >       "repoDigests": [
	I0813 20:25:08.719117   78216 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 20:25:08.719131   78216 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 20:25:08.719139   78216 command_runner.go:124] >       ],
	I0813 20:25:08.719146   78216 command_runner.go:124] >       "size": "689817",
	I0813 20:25:08.719154   78216 command_runner.go:124] >       "uid": null,
	I0813 20:25:08.719161   78216 command_runner.go:124] >       "username": "",
	I0813 20:25:08.719170   78216 command_runner.go:124] >       "spec": null
	I0813 20:25:08.719176   78216 command_runner.go:124] >     }
	I0813 20:25:08.719182   78216 command_runner.go:124] >   ]
	I0813 20:25:08.719188   78216 command_runner.go:124] > }
	I0813 20:25:08.719363   78216 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:25:08.719382   78216 cache_images.go:74] Images are preloaded, skipping loading
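The image inventory above is what crio.go:424 checks before concluding that all images are preloaded. For reference, a minimal Go sketch of such a check, assuming only the JSON shape visible in the log (the struct, file name, and expected-tag list are illustrative, not minikube's actual code):

// Hypothetical sketch: decode an image-list JSON like the one logged
// above and verify that an expected tag is present. Field names mirror
// the fields visible in the log (id, repoTags, repoDigests, size).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	raw, err := os.ReadFile("images.json") // e.g. saved image-list output
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	want := map[string]bool{"k8s.gcr.io/kube-apiserver:v1.21.3": false}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if _, ok := want[tag]; ok {
				want[tag] = true
			}
		}
	}
	for tag, found := range want {
		fmt.Printf("%s preloaded: %v\n", tag, found)
	}
}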
	I0813 20:25:08.719441   78216 ssh_runner.go:149] Run: crio config
	I0813 20:25:08.778051   78216 command_runner.go:124] ! time="2021-08-13T20:25:08Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 20:25:08.781021   78216 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 20:25:08.783459   78216 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 20:25:08.783476   78216 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 20:25:08.783483   78216 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 20:25:08.783487   78216 command_runner.go:124] > #
	I0813 20:25:08.783494   78216 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 20:25:08.783501   78216 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 20:25:08.783508   78216 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 20:25:08.783517   78216 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 20:25:08.783521   78216 command_runner.go:124] > # reload'.
	I0813 20:25:08.783527   78216 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 20:25:08.783538   78216 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 20:25:08.783548   78216 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 20:25:08.783563   78216 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 20:25:08.783569   78216 command_runner.go:124] > [crio]
	I0813 20:25:08.783576   78216 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 20:25:08.783584   78216 command_runner.go:124] > # container images, in this directory.
	I0813 20:25:08.783588   78216 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 20:25:08.783599   78216 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 20:25:08.783606   78216 command_runner.go:124] > #runroot = "/run/containers/storage"
	I0813 20:25:08.783613   78216 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 20:25:08.783622   78216 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 20:25:08.783629   78216 command_runner.go:124] > #storage_driver = "overlay"
	I0813 20:25:08.783635   78216 command_runner.go:124] > # List of options to pass to the storage driver. Please refer to
	I0813 20:25:08.783643   78216 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 20:25:08.783649   78216 command_runner.go:124] > #storage_option = [
	I0813 20:25:08.783653   78216 command_runner.go:124] > #	"overlay.mountopt=nodev",
	I0813 20:25:08.783659   78216 command_runner.go:124] > #]
	I0813 20:25:08.783666   78216 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 20:25:08.783674   78216 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 20:25:08.783678   78216 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 20:25:08.783684   78216 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 20:25:08.783693   78216 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 20:25:08.783700   78216 command_runner.go:124] > # always happen on a node reboot
	I0813 20:25:08.783707   78216 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 20:25:08.783712   78216 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 20:25:08.783721   78216 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 20:25:08.783728   78216 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 20:25:08.783737   78216 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 20:25:08.783748   78216 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 20:25:08.783754   78216 command_runner.go:124] > [crio.api]
	I0813 20:25:08.783760   78216 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 20:25:08.783767   78216 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 20:25:08.783772   78216 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 20:25:08.783779   78216 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 20:25:08.783786   78216 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 20:25:08.783793   78216 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 20:25:08.783797   78216 command_runner.go:124] > stream_port = "0"
	I0813 20:25:08.783803   78216 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 20:25:08.783809   78216 command_runner.go:124] > stream_enable_tls = false
	I0813 20:25:08.783817   78216 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 20:25:08.783824   78216 command_runner.go:124] > stream_idle_timeout = ""
	I0813 20:25:08.783834   78216 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 20:25:08.783842   78216 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 20:25:08.783846   78216 command_runner.go:124] > # minutes.
	I0813 20:25:08.783850   78216 command_runner.go:124] > stream_tls_cert = ""
	I0813 20:25:08.783856   78216 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 20:25:08.783865   78216 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 20:25:08.783869   78216 command_runner.go:124] > stream_tls_key = ""
	I0813 20:25:08.783875   78216 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 20:25:08.783883   78216 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 20:25:08.783892   78216 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 20:25:08.783896   78216 command_runner.go:124] > stream_tls_ca = ""
	I0813 20:25:08.783904   78216 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:25:08.783911   78216 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 20:25:08.783919   78216 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:25:08.783925   78216 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 20:25:08.783932   78216 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 20:25:08.783940   78216 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 20:25:08.783944   78216 command_runner.go:124] > [crio.runtime]
	I0813 20:25:08.783952   78216 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 20:25:08.783963   78216 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 20:25:08.783974   78216 command_runner.go:124] > # "nofile=1024:2048"
	I0813 20:25:08.783988   78216 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 20:25:08.783997   78216 command_runner.go:124] > #default_ulimits = [
	I0813 20:25:08.784002   78216 command_runner.go:124] > #]
	I0813 20:25:08.784014   78216 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 20:25:08.784022   78216 command_runner.go:124] > no_pivot = false
	I0813 20:25:08.784032   78216 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 20:25:08.784050   78216 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 20:25:08.784057   78216 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 20:25:08.784063   78216 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 20:25:08.784070   78216 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 20:25:08.784074   78216 command_runner.go:124] > conmon = ""
	I0813 20:25:08.784079   78216 command_runner.go:124] > # Cgroup setting for conmon
	I0813 20:25:08.784086   78216 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 20:25:08.784093   78216 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 20:25:08.784101   78216 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 20:25:08.784105   78216 command_runner.go:124] > conmon_env = [
	I0813 20:25:08.784111   78216 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 20:25:08.784116   78216 command_runner.go:124] > ]
	I0813 20:25:08.784124   78216 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 20:25:08.784132   78216 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 20:25:08.784137   78216 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 20:25:08.784141   78216 command_runner.go:124] > default_env = [
	I0813 20:25:08.784144   78216 command_runner.go:124] > ]
	I0813 20:25:08.784150   78216 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 20:25:08.784156   78216 command_runner.go:124] > selinux = false
	I0813 20:25:08.784163   78216 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 20:25:08.784171   78216 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 20:25:08.784177   78216 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 20:25:08.784183   78216 command_runner.go:124] > seccomp_profile = ""
	I0813 20:25:08.784189   78216 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 20:25:08.784197   78216 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 20:25:08.784203   78216 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 20:25:08.784210   78216 command_runner.go:124] > # which might increase security.
	I0813 20:25:08.784214   78216 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 20:25:08.784221   78216 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 20:25:08.784231   78216 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 20:25:08.784238   78216 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 20:25:08.784247   78216 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 20:25:08.784254   78216 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:25:08.784261   78216 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 20:25:08.784267   78216 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 20:25:08.784273   78216 command_runner.go:124] > # irqbalance daemon.
	I0813 20:25:08.784278   78216 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 20:25:08.784286   78216 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 20:25:08.784290   78216 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 20:25:08.784297   78216 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 20:25:08.784303   78216 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 20:25:08.784309   78216 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 20:25:08.784324   78216 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 20:25:08.784330   78216 command_runner.go:124] > # will be added.
	I0813 20:25:08.784334   78216 command_runner.go:124] > default_capabilities = [
	I0813 20:25:08.784338   78216 command_runner.go:124] > 	"CHOWN",
	I0813 20:25:08.784341   78216 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 20:25:08.784345   78216 command_runner.go:124] > 	"FSETID",
	I0813 20:25:08.784348   78216 command_runner.go:124] > 	"FOWNER",
	I0813 20:25:08.784352   78216 command_runner.go:124] > 	"SETGID",
	I0813 20:25:08.784355   78216 command_runner.go:124] > 	"SETUID",
	I0813 20:25:08.784359   78216 command_runner.go:124] > 	"SETPCAP",
	I0813 20:25:08.784363   78216 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 20:25:08.784368   78216 command_runner.go:124] > 	"KILL",
	I0813 20:25:08.784374   78216 command_runner.go:124] > ]
	I0813 20:25:08.784381   78216 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 20:25:08.784389   78216 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:25:08.784396   78216 command_runner.go:124] > default_sysctls = [
	I0813 20:25:08.784398   78216 command_runner.go:124] > ]
	I0813 20:25:08.784403   78216 command_runner.go:124] > # List of additional devices, specified as
	I0813 20:25:08.784414   78216 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 20:25:08.784421   78216 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 20:25:08.784428   78216 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:25:08.784434   78216 command_runner.go:124] > additional_devices = [
	I0813 20:25:08.784438   78216 command_runner.go:124] > ]
	I0813 20:25:08.784444   78216 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 20:25:08.784452   78216 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 20:25:08.784456   78216 command_runner.go:124] > hooks_dir = [
	I0813 20:25:08.784462   78216 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 20:25:08.784469   78216 command_runner.go:124] > ]
	I0813 20:25:08.784476   78216 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 20:25:08.784488   78216 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 20:25:08.784495   78216 command_runner.go:124] > # its default mounts from the following two files:
	I0813 20:25:08.784498   78216 command_runner.go:124] > #
	I0813 20:25:08.784505   78216 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 20:25:08.784513   78216 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 20:25:08.784519   78216 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 20:25:08.784525   78216 command_runner.go:124] > #
	I0813 20:25:08.784531   78216 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 20:25:08.784540   78216 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 20:25:08.784547   78216 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 20:25:08.784554   78216 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 20:25:08.784557   78216 command_runner.go:124] > #
	I0813 20:25:08.784561   78216 command_runner.go:124] > #default_mounts_file = ""
	I0813 20:25:08.784570   78216 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 20:25:08.784573   78216 command_runner.go:124] > pids_limit = 1024
	I0813 20:25:08.784583   78216 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 20:25:08.784589   78216 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 20:25:08.784598   78216 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 20:25:08.784607   78216 command_runner.go:124] > # limit is never exceeded.
	I0813 20:25:08.784613   78216 command_runner.go:124] > log_size_max = -1
	I0813 20:25:08.784633   78216 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 20:25:08.784639   78216 command_runner.go:124] > log_to_journald = false
	I0813 20:25:08.784647   78216 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 20:25:08.784656   78216 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 20:25:08.784661   78216 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 20:25:08.784669   78216 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 20:25:08.784674   78216 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 20:25:08.784680   78216 command_runner.go:124] > bind_mount_prefix = ""
	I0813 20:25:08.784687   78216 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 20:25:08.784692   78216 command_runner.go:124] > read_only = false
	I0813 20:25:08.784699   78216 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 20:25:08.784712   78216 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 20:25:08.784719   78216 command_runner.go:124] > # live configuration reload.
	I0813 20:25:08.784723   78216 command_runner.go:124] > log_level = "info"
	I0813 20:25:08.784732   78216 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 20:25:08.784737   78216 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:25:08.784743   78216 command_runner.go:124] > log_filter = ""
	I0813 20:25:08.784750   78216 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 20:25:08.784756   78216 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 20:25:08.784764   78216 command_runner.go:124] > # separated by comma.
	I0813 20:25:08.784767   78216 command_runner.go:124] > uid_mappings = ""
	I0813 20:25:08.784774   78216 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 20:25:08.784782   78216 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 20:25:08.784786   78216 command_runner.go:124] > # separated by comma.
	I0813 20:25:08.784790   78216 command_runner.go:124] > gid_mappings = ""
	I0813 20:25:08.784796   78216 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 20:25:08.784804   78216 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 20:25:08.784810   78216 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 20:25:08.784816   78216 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 20:25:08.784822   78216 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 20:25:08.784829   78216 command_runner.go:124] > # and manage their lifecycle.
	I0813 20:25:08.784836   78216 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 20:25:08.784842   78216 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 20:25:08.784848   78216 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 20:25:08.784854   78216 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 20:25:08.784861   78216 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 20:25:08.784866   78216 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 20:25:08.784872   78216 command_runner.go:124] > drop_infra_ctr = false
	I0813 20:25:08.784879   78216 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 20:25:08.784887   78216 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 20:25:08.784894   78216 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 20:25:08.784902   78216 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 20:25:08.784909   78216 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 20:25:08.784916   78216 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 20:25:08.784920   78216 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 20:25:08.784927   78216 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 20:25:08.784934   78216 command_runner.go:124] > pinns_path = ""
	I0813 20:25:08.784940   78216 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 20:25:08.784949   78216 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 20:25:08.784959   78216 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 20:25:08.784967   78216 command_runner.go:124] > default_runtime = "runc"
	I0813 20:25:08.784973   78216 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 20:25:08.784982   78216 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 20:25:08.784988   78216 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 20:25:08.784997   78216 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 20:25:08.785000   78216 command_runner.go:124] > #
	I0813 20:25:08.785004   78216 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 20:25:08.785011   78216 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 20:25:08.785017   78216 command_runner.go:124] > #  runtime_type = "oci"
	I0813 20:25:08.785024   78216 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 20:25:08.785029   78216 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 20:25:08.785033   78216 command_runner.go:124] > #  allowed_annotations = []
	I0813 20:25:08.785036   78216 command_runner.go:124] > # Where:
	I0813 20:25:08.785042   78216 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 20:25:08.785050   78216 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 20:25:08.785057   78216 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 20:25:08.785065   78216 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 20:25:08.785069   78216 command_runner.go:124] > #   in $PATH.
	I0813 20:25:08.785075   78216 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 20:25:08.785082   78216 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 20:25:08.785089   78216 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 20:25:08.785095   78216 command_runner.go:124] > #   state.
	I0813 20:25:08.785101   78216 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 20:25:08.785109   78216 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 20:25:08.785115   78216 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 20:25:08.785127   78216 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 20:25:08.785132   78216 command_runner.go:124] > #   The currently recognized values are:
	I0813 20:25:08.785141   78216 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 20:25:08.785150   78216 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 20:25:08.785158   78216 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 20:25:08.785162   78216 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 20:25:08.785171   78216 command_runner.go:124] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0813 20:25:08.785175   78216 command_runner.go:124] > runtime_type = "oci"
	I0813 20:25:08.785182   78216 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 20:25:08.785188   78216 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 20:25:08.785197   78216 command_runner.go:124] > # running containers
	I0813 20:25:08.785204   78216 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 20:25:08.785211   78216 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 20:25:08.785222   78216 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 20:25:08.785227   78216 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0813 20:25:08.785235   78216 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 20:25:08.785240   78216 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 20:25:08.785246   78216 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 20:25:08.785251   78216 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 20:25:08.785258   78216 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 20:25:08.785262   78216 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 20:25:08.785271   78216 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 20:25:08.785276   78216 command_runner.go:124] > #
	I0813 20:25:08.785285   78216 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 20:25:08.785292   78216 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 20:25:08.785300   78216 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 20:25:08.785307   78216 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 20:25:08.785319   78216 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 20:25:08.785323   78216 command_runner.go:124] > [crio.image]
	I0813 20:25:08.785329   78216 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 20:25:08.785333   78216 command_runner.go:124] > default_transport = "docker://"
	I0813 20:25:08.785342   78216 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 20:25:08.785348   78216 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:25:08.785354   78216 command_runner.go:124] > global_auth_file = ""
	I0813 20:25:08.785360   78216 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 20:25:08.785367   78216 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:25:08.785372   78216 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 20:25:08.785381   78216 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 20:25:08.785387   78216 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:25:08.785394   78216 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:25:08.785398   78216 command_runner.go:124] > pause_image_auth_file = ""
	I0813 20:25:08.785406   78216 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 20:25:08.785417   78216 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0813 20:25:08.785427   78216 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0813 20:25:08.785434   78216 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 20:25:08.785440   78216 command_runner.go:124] > pause_command = "/pause"
	I0813 20:25:08.785447   78216 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 20:25:08.785455   78216 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 20:25:08.785462   78216 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 20:25:08.785470   78216 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 20:25:08.785478   78216 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 20:25:08.785482   78216 command_runner.go:124] > signature_policy = ""
	I0813 20:25:08.785507   78216 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 20:25:08.785519   78216 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 20:25:08.785534   78216 command_runner.go:124] > # changing them here.
	I0813 20:25:08.785539   78216 command_runner.go:124] > #insecure_registries = "[]"
	I0813 20:25:08.785545   78216 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 20:25:08.785553   78216 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 20:25:08.785557   78216 command_runner.go:124] > image_volumes = "mkdir"
	I0813 20:25:08.785563   78216 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 20:25:08.785571   78216 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 20:25:08.785582   78216 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0813 20:25:08.785590   78216 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 20:25:08.785594   78216 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 20:25:08.785598   78216 command_runner.go:124] > #registries = [
	I0813 20:25:08.785601   78216 command_runner.go:124] > # ]
	I0813 20:25:08.785607   78216 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 20:25:08.785616   78216 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 20:25:08.785623   78216 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 20:25:08.785629   78216 command_runner.go:124] > # CNI plugins.
	I0813 20:25:08.785633   78216 command_runner.go:124] > [crio.network]
	I0813 20:25:08.785642   78216 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 20:25:08.785648   78216 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0813 20:25:08.785655   78216 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 20:25:08.785661   78216 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 20:25:08.785668   78216 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 20:25:08.785673   78216 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 20:25:08.785679   78216 command_runner.go:124] > plugin_dirs = [
	I0813 20:25:08.785687   78216 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 20:25:08.785694   78216 command_runner.go:124] > ]
	I0813 20:25:08.785707   78216 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 20:25:08.785717   78216 command_runner.go:124] > [crio.metrics]
	I0813 20:25:08.785727   78216 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 20:25:08.785737   78216 command_runner.go:124] > enable_metrics = false
	I0813 20:25:08.785746   78216 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 20:25:08.785754   78216 command_runner.go:124] > metrics_port = 9090
	I0813 20:25:08.785776   78216 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 20:25:08.785782   78216 command_runner.go:124] > metrics_socket = ""
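The dump above is the output of the `crio config` call at 20:25:08.719441, which minikube parses to learn the effective runtime settings (cgroup_manager, CNI directories, pause image, and so on). A minimal sketch of extracting one such setting by shelling out the same way, assuming `crio` is on $PATH; a real parser would use a TOML library rather than line matching:

// Hypothetical sketch: run `crio config` and print one effective key.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crio", "config").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "cgroup_manager") {
			fmt.Println(line) // e.g. cgroup_manager = "systemd"
		}
	}
}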
	I0813 20:25:08.785851   78216 cni.go:93] Creating CNI manager for ""
	I0813 20:25:08.785865   78216 cni.go:154] 1 nodes found, recommending kindnet
	I0813 20:25:08.785874   78216 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:25:08.785886   78216 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813202501-13784 NodeName:multinode-20210813202501-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:25:08.786035   78216 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813202501-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
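The kubeadm config above is rendered from the options struct logged at kubeadm.go:153. A cut-down sketch of how such YAML can be produced from parameters with text/template; the template fragment is illustrative and much shorter than minikube's actual bootstrapper template:

// Hypothetical sketch: render an InitConfiguration fragment from a
// parameter struct. Values are taken from the log above.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	p := struct {
		AdvertiseAddress, CRISocket, NodeName, NodeIP string
		APIServerPort                                 int
	}{"192.168.49.2", "/var/run/crio/crio.sock", "multinode-20210813202501-13784", "192.168.49.2", 8443}
	t := template.Must(template.New("init").Parse(initCfg))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}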
	
	I0813 20:25:08.786119   78216 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-20210813202501-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202501-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:25:08.786168   78216 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:25:08.792601   78216 command_runner.go:124] > kubeadm
	I0813 20:25:08.792621   78216 command_runner.go:124] > kubectl
	I0813 20:25:08.792627   78216 command_runner.go:124] > kubelet
	I0813 20:25:08.792647   78216 binaries.go:44] Found k8s binaries, skipping transfer
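binaries.go:44 skips the binary transfer because the `sudo ls` above returned all three expected names. The same check, sketched against a local filesystem instead of the ssh runner (paths match the log; the code itself is illustrative):

// Hypothetical sketch: the "found k8s binaries, skipping transfer" check.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/var/lib/minikube/binaries/v1.21.3"
	var missing []string
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, bin)); err != nil {
			missing = append(missing, bin)
		}
	}
	if len(missing) == 0 {
		fmt.Println("found k8s binaries, skipping transfer")
	} else {
		fmt.Println("need to transfer:", missing)
	}
}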
	I0813 20:25:08.792685   78216 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:25:08.798926   78216 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (561 bytes)
	I0813 20:25:08.810187   78216 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:25:08.821528   78216 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I0813 20:25:08.832728   78216 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:25:08.835337   78216 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
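The bash one-liner above rewrites /etc/hosts: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the temp file into place. An equivalent sketch in Go (illustrative; requires root, and unlike the shell version it writes the file directly rather than via a temp copy):

// Hypothetical sketch: drop any stale control-plane.minikube.internal
// line from /etc/hosts, then append the current mapping.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.2\tcontrol-plane.minikube.internal"
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}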
	I0813 20:25:08.843339   78216 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784 for IP: 192.168.49.2
	I0813 20:25:08.843387   78216 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:25:08.843410   78216 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:25:08.843468   78216 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.key
	I0813 20:25:08.843479   78216 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.crt with IP's: []
	I0813 20:25:09.017012   78216 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.crt ...
	I0813 20:25:09.017047   78216 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.crt: {Name:mk151c1c6879a499d598c51d20b9ad635e41f394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:09.017252   78216 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.key ...
	I0813 20:25:09.017267   78216 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.key: {Name:mk6bd1bd8f5f4eea54d1002a49cc9c8826a9818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:09.017353   78216 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.key.dd3b5fb2
	I0813 20:25:09.017363   78216 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:25:09.096947   78216 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.crt.dd3b5fb2 ...
	I0813 20:25:09.096987   78216 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.crt.dd3b5fb2: {Name:mka8ddcf0a097389b84ba817132176344df1e694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:09.097207   78216 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.key.dd3b5fb2 ...
	I0813 20:25:09.097224   78216 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.key.dd3b5fb2: {Name:mk4c5653a8ffdaed3bef2b10b4511626772f398c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:09.097314   78216 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.crt
	I0813 20:25:09.097380   78216 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.key
	I0813 20:25:09.097435   78216 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.key
	I0813 20:25:09.097446   78216 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.crt with IP's: []
	I0813 20:25:09.275169   78216 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.crt ...
	I0813 20:25:09.275202   78216 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.crt: {Name:mkdd0edcb69655a5c5504adb2b9e2b183fe8fe5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:09.275401   78216 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.key ...
	I0813 20:25:09.275415   78216 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.key: {Name:mk77d0c517e7eb4bbee528bca8c3ef57754c831a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
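crypto.go:69 generates the apiserver certificate with the IP SANs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] logged above, signed by minikubeCA. A minimal sketch of producing a certificate with those SANs using crypto/x509; for brevity it self-signs instead of signing with a CA key, so it mirrors only the SAN handling:

// Hypothetical sketch: a serving certificate with the IP SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}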
	I0813 20:25:09.275494   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0813 20:25:09.275510   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0813 20:25:09.275523   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0813 20:25:09.275537   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0813 20:25:09.275545   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 20:25:09.275558   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 20:25:09.275567   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 20:25:09.275578   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 20:25:09.275624   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:25:09.275661   78216 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:25:09.275670   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:25:09.275694   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:25:09.275719   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:25:09.275740   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:25:09.275782   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:25:09.275815   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem -> /usr/share/ca-certificates/13784.pem
	I0813 20:25:09.275828   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> /usr/share/ca-certificates/137842.pem
	I0813 20:25:09.275837   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:25:09.276686   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:25:09.358414   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:25:09.375021   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:25:09.390122   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:25:09.405514   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:25:09.420585   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:25:09.435510   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:25:09.450587   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:25:09.465302   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:25:09.480429   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:25:09.495451   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:25:09.510684   78216 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:25:09.521757   78216 ssh_runner.go:149] Run: openssl version
	I0813 20:25:09.526142   78216 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0813 20:25:09.526202   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:25:09.532682   78216 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:25:09.535296   78216 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:25:09.535369   78216 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:25:09.535418   78216 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:25:09.539592   78216 command_runner.go:124] > 51391683
	I0813 20:25:09.539737   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:25:09.546007   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:25:09.552479   78216 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:25:09.555209   78216 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:25:09.555237   78216 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:25:09.555273   78216 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:25:09.559455   78216 command_runner.go:124] > 3ec20f2e
	I0813 20:25:09.559645   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:25:09.566159   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:25:09.572793   78216 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:25:09.575606   78216 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:25:09.575658   78216 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:25:09.575695   78216 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:25:09.580171   78216 command_runner.go:124] > b5213941
	I0813 20:25:09.580222   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
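	The sequence above installs the extra CA certificates into the guest trust store the way OpenSSL expects: each PEM is hashed with "openssl x509 -hash -noout" and then symlinked as /etc/ssl/certs/<hash>.0, which is the filename scheme OpenSSL uses for CA lookup. A minimal shell sketch of that step (the certificate path is illustrative):
	
		# Sketch of the hash-and-symlink trust installation logged above.
		CERT=/usr/share/ca-certificates/minikubeCA.pem        # illustrative; any copied PEM works
		HASH=$(openssl x509 -hash -noout -in "$CERT")         # e.g. b5213941 for minikubeCA.pem
		# Guarded like the commands in the log: only create the link if it is missing.
		sudo /bin/bash -c "test -L /etc/ssl/certs/${HASH}.0 || ln -fs ${CERT} /etc/ssl/certs/${HASH}.0"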
	I0813 20:25:09.586656   78216 kubeadm.go:390] StartCluster: {Name:multinode-20210813202501-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202501-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:25:09.586738   78216 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:25:09.586768   78216 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:25:09.608509   78216 cri.go:76] found id: ""
	I0813 20:25:09.608577   78216 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:25:09.614143   78216 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0813 20:25:09.614172   78216 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0813 20:25:09.614184   78216 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0813 20:25:09.614727   78216 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:25:09.620750   78216 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:25:09.620792   78216 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:25:09.626704   78216 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0813 20:25:09.626729   78216 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0813 20:25:09.626741   78216 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0813 20:25:09.626759   78216 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:25:09.626799   78216 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:25:09.626836   78216 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:25:09.677269   78216 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0813 20:25:09.677554   78216 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 20:25:09.705506   78216 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0813 20:25:09.705580   78216 command_runner.go:124] > KERNEL_VERSION: 4.9.0-16-amd64
	I0813 20:25:09.705619   78216 command_runner.go:124] > OS: Linux
	I0813 20:25:09.705671   78216 command_runner.go:124] > CGROUPS_CPU: enabled
	I0813 20:25:09.705726   78216 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0813 20:25:09.705772   78216 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0813 20:25:09.705863   78216 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0813 20:25:09.705933   78216 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0813 20:25:09.706005   78216 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0813 20:25:09.706068   78216 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0813 20:25:09.706135   78216 command_runner.go:124] > CGROUPS_HUGETLB: missing
	I0813 20:25:09.775034   78216 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 20:25:09.775182   78216 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 20:25:09.775314   78216 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0813 20:25:09.900424   78216 out.go:204]   - Generating certificates and keys ...
	I0813 20:25:09.897385   78216 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 20:25:09.900596   78216 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0813 20:25:09.900705   78216 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0813 20:25:09.988819   78216 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0813 20:25:10.156097   78216 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0813 20:25:10.240092   78216 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0813 20:25:10.330488   78216 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0813 20:25:10.596984   78216 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0813 20:25:10.597166   78216 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210813202501-13784] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0813 20:25:10.794299   78216 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0813 20:25:10.794439   78216 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210813202501-13784] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0813 20:25:10.925556   78216 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0813 20:25:11.071211   78216 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0813 20:25:11.245830   78216 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0813 20:25:11.245931   78216 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 20:25:11.499076   78216 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 20:25:11.688356   78216 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 20:25:11.849216   78216 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 20:25:11.947812   78216 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 20:25:11.955021   78216 command_runner.go:124] > [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
	I0813 20:25:11.955169   78216 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 20:25:11.955970   78216 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 20:25:11.956025   78216 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 20:25:12.015282   78216 out.go:204]   - Booting up control plane ...
	I0813 20:25:12.013475   78216 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 20:25:12.015439   78216 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 20:25:12.020925   78216 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 20:25:12.021868   78216 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 20:25:12.022566   78216 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 20:25:12.024656   78216 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 20:25:26.026960   78216 command_runner.go:124] > [apiclient] All control plane components are healthy after 14.002288 seconds
	I0813 20:25:26.027147   78216 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 20:25:26.038160   78216 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 20:25:26.553943   78216 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0813 20:25:26.554170   78216 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210813202501-13784 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 20:25:27.061106   78216 out.go:204]   - Configuring RBAC rules ...
	I0813 20:25:27.061121   78216 command_runner.go:124] > [bootstrap-token] Using token: b8nglx.fy9vbvammc8psc88
	I0813 20:25:27.061268   78216 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 20:25:27.066172   78216 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 20:25:27.072050   78216 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 20:25:27.073895   78216 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 20:25:27.075710   78216 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 20:25:27.077461   78216 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 20:25:27.083621   78216 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 20:25:27.225555   78216 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0813 20:25:27.470275   78216 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0813 20:25:27.471184   78216 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0813 20:25:27.471309   78216 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0813 20:25:27.471360   78216 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0813 20:25:27.471446   78216 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 20:25:27.471525   78216 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 20:25:27.471602   78216 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0813 20:25:27.471669   78216 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 20:25:27.471743   78216 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0813 20:25:27.471844   78216 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 20:25:27.471941   78216 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 20:25:27.472040   78216 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0813 20:25:27.472157   78216 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0813 20:25:27.472277   78216 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token b8nglx.fy9vbvammc8psc88 \
	I0813 20:25:27.472415   78216 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:c4abb71b090fb6a33c758a3743cc840f782cf9be45db9979473fed7ebf39bccf \
	I0813 20:25:27.472445   78216 command_runner.go:124] > 	--control-plane 
	I0813 20:25:27.472580   78216 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0813 20:25:27.472689   78216 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token b8nglx.fy9vbvammc8psc88 \
	I0813 20:25:27.472800   78216 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:c4abb71b090fb6a33c758a3743cc840f782cf9be45db9979473fed7ebf39bccf 
	I0813 20:25:27.473714   78216 command_runner.go:124] ! 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	I0813 20:25:27.473804   78216 command_runner.go:124] ! 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0813 20:25:27.474055   78216 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
	I0813 20:25:27.474160   78216 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 20:25:27.474208   78216 cni.go:93] Creating CNI manager for ""
	I0813 20:25:27.474219   78216 cni.go:154] 1 nodes found, recommending kindnet
	I0813 20:25:27.475967   78216 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:25:27.476018   78216 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:25:27.479307   78216 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 20:25:27.479324   78216 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0813 20:25:27.479330   78216 command_runner.go:124] > Device: 801h/2049d	Inode: 4333431     Links: 1
	I0813 20:25:27.479337   78216 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:25:27.479345   78216 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0813 20:25:27.479351   78216 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0813 20:25:27.479356   78216 command_runner.go:124] > Change: 2021-08-10 21:18:56.705166650 +0000
	I0813 20:25:27.479361   78216 command_runner.go:124] >  Birth: -
	I0813 20:25:27.479445   78216 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:25:27.479461   78216 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:25:27.491353   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:25:27.799956   78216 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0813 20:25:27.803441   78216 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0813 20:25:27.808399   78216 command_runner.go:124] > serviceaccount/kindnet created
	I0813 20:25:27.814932   78216 command_runner.go:124] > daemonset.apps/kindnet created
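	Once the CNI manifest is applied, the kindnet rollout can be checked with kubectl. A hedged example, assuming the daemonset is created in the kube-system namespace as minikube's bundled kindnet manifest does:
	
		# Wait for the kindnet daemonset to finish rolling out (assumed namespace: kube-system).
		sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
		  -n kube-system rollout status daemonset kindnet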
	I0813 20:25:27.819001   78216 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:25:27.819128   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=multinode-20210813202501-13784 minikube.k8s.io/updated_at=2021_08_13T20_25_27_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:27.819126   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:27.835266   78216 command_runner.go:124] > -16
	I0813 20:25:27.835312   78216 ops.go:34] apiserver oom_adj: -16
	I0813 20:25:27.964537   78216 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0813 20:25:27.964627   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:27.964681   78216 command_runner.go:124] > node/multinode-20210813202501-13784 labeled
	I0813 20:25:28.025636   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:28.526437   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:28.588237   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:29.026860   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:29.089598   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:29.526110   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:29.607407   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:30.025970   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:30.087598   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:30.526205   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:30.587251   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:31.026883   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:31.090139   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:31.526881   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:31.588731   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:32.026611   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:32.089269   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:32.526797   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:32.596170   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:33.026841   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:33.089722   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:33.525906   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:33.585708   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:34.026346   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:34.089135   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:34.526799   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:34.589264   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:35.026813   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:35.088209   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:35.526797   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:35.587340   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:36.026111   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:36.087468   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:36.526229   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:37.189082   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:37.525993   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:39.089283   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:39.092873   78216 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.566840648s)
	I0813 20:25:39.526460   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:41.505285   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:41.507965   78216 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.981461478s)
	I0813 20:25:41.526076   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:41.674232   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:42.026654   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:42.092046   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:42.526579   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:42.588433   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:43.025907   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:43.088594   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:43.526047   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:43.588478   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:44.026039   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:44.088518   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:44.525984   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:44.589980   78216 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 20:25:45.026552   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:25:45.087966   78216 command_runner.go:124] > NAME      SECRETS   AGE
	I0813 20:25:45.087987   78216 command_runner.go:124] > default   1         1s
	I0813 20:25:45.090037   78216 kubeadm.go:985] duration metric: took 17.270980396s to wait for elevateKubeSystemPrivileges.
	I0813 20:25:45.090064   78216 kubeadm.go:392] StartCluster complete in 35.503416104s
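	The repeated serviceaccounts "default" not found errors above are expected: minikube polls "kubectl get sa default" about every 500ms until the controller-manager creates the default service account (roughly 17s in this run). An equivalent hand-rolled wait, as a sketch:
	
		# Illustrative poll loop matching the retries logged above.
		KUBECTL=/var/lib/minikube/binaries/v1.21.3/kubectl
		until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5   # the log shows ~500ms between attempts
		done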
	I0813 20:25:45.090079   78216 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:45.090176   78216 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:45.091681   78216 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:25:45.092297   78216 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:45.092623   78216 kapi.go:59] client config for multinode-20210813202501-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:45.093264   78216 cert_rotation.go:137] Starting client certificate rotation controller
	I0813 20:25:45.094871   78216 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:45.094887   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:45.094892   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:45.094896   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:45.102773   78216 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 20:25:45.102789   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:45.102793   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:45.102797   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:45.102800   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:45.102803   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:45.102806   78216 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:45.102811   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:45 GMT
	I0813 20:25:45.102834   78216 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c820a5fe-cc2f-463a-95bb-937191445498","resourceVersion":"383","creationTimestamp":"2021-08-13T20:25:27Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:45.103416   78216 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c820a5fe-cc2f-463a-95bb-937191445498","resourceVersion":"383","creationTimestamp":"2021-08-13T20:25:27Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:45.103466   78216 round_trippers.go:432] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:45.103478   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:45.103484   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:45.103490   78216 round_trippers.go:442]     Content-Type: application/json
	I0813 20:25:45.103496   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:45.106456   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:45.106472   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:45.106478   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:45.106482   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:45.106486   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:45.106490   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:45.106495   78216 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:45.106499   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:45 GMT
	I0813 20:25:45.106518   78216 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c820a5fe-cc2f-463a-95bb-937191445498","resourceVersion":"402","creationTimestamp":"2021-08-13T20:25:27Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:45.607375   78216 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 20:25:45.607399   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:45.607407   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:45.607412   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:45.608882   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:45.608900   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:45.608904   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:45.608908   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:45.608911   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:45.608914   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:45.608917   78216 round_trippers.go:463]     Content-Length: 291
	I0813 20:25:45.608920   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:45 GMT
	I0813 20:25:45.608937   78216 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c820a5fe-cc2f-463a-95bb-937191445498","resourceVersion":"444","creationTimestamp":"2021-08-13T20:25:27Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0813 20:25:45.609026   78216 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210813202501-13784" rescaled to 1
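	The GET/PUT pair above rescales CoreDNS through the deployment's autoscaling/v1 Scale subresource, dropping spec.replicas from 2 to 1. The same change can be made from the CLI; a sketch:
	
		# CLI equivalent of the Scale-subresource PUT logged above.
		kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1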
	I0813 20:25:45.609067   78216 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:25:45.610772   78216 out.go:177] * Verifying Kubernetes components...
	I0813 20:25:45.609134   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:25:45.609159   78216 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:25:45.610961   78216 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210813202501-13784"
	I0813 20:25:45.610981   78216 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210813202501-13784"
	W0813 20:25:45.610991   78216 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:25:45.609393   78216 config.go:177] Loaded profile config "multinode-20210813202501-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:25:45.611021   78216 host.go:66] Checking if "multinode-20210813202501-13784" exists ...
	I0813 20:25:45.611032   78216 addons.go:59] Setting default-storageclass=true in profile "multinode-20210813202501-13784"
	I0813 20:25:45.610838   78216 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:25:45.611060   78216 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210813202501-13784"
	I0813 20:25:45.611418   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Status}}
	I0813 20:25:45.611591   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Status}}
	I0813 20:25:45.655228   78216 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:45.655483   78216 kapi.go:59] client config for multinode-20210813202501-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:45.657235   78216 round_trippers.go:432] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0813 20:25:45.657250   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:45.657257   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:45.657262   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:45.660751   78216 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:25:45.659901   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:45.660815   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:45.660824   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:45.660828   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:45.660832   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:45.660837   78216 round_trippers.go:463]     Content-Length: 109
	I0813 20:25:45.660842   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:45 GMT
	I0813 20:25:45.660847   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:45.660866   78216 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"445"},"items":[]}
	I0813 20:25:45.660875   78216 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:25:45.660887   78216 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:25:45.660953   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:45.661461   78216 addons.go:135] Setting addon default-storageclass=true in "multinode-20210813202501-13784"
	W0813 20:25:45.661477   78216 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:25:45.661516   78216 host.go:66] Checking if "multinode-20210813202501-13784" exists ...
	I0813 20:25:45.662039   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Status}}
	I0813 20:25:45.705036   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:25:45.706753   78216 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:25:45.706770   78216 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:25:45.706820   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:25:45.712132   78216 command_runner.go:124] > apiVersion: v1
	I0813 20:25:45.712144   78216 command_runner.go:124] > data:
	I0813 20:25:45.712148   78216 command_runner.go:124] >   Corefile: |
	I0813 20:25:45.712152   78216 command_runner.go:124] >     .:53 {
	I0813 20:25:45.712156   78216 command_runner.go:124] >         errors
	I0813 20:25:45.712161   78216 command_runner.go:124] >         health {
	I0813 20:25:45.712166   78216 command_runner.go:124] >            lameduck 5s
	I0813 20:25:45.712170   78216 command_runner.go:124] >         }
	I0813 20:25:45.712173   78216 command_runner.go:124] >         ready
	I0813 20:25:45.712183   78216 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0813 20:25:45.712187   78216 command_runner.go:124] >            pods insecure
	I0813 20:25:45.712193   78216 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0813 20:25:45.712198   78216 command_runner.go:124] >            ttl 30
	I0813 20:25:45.712202   78216 command_runner.go:124] >         }
	I0813 20:25:45.712209   78216 command_runner.go:124] >         prometheus :9153
	I0813 20:25:45.712216   78216 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0813 20:25:45.712223   78216 command_runner.go:124] >            max_concurrent 1000
	I0813 20:25:45.712228   78216 command_runner.go:124] >         }
	I0813 20:25:45.712235   78216 command_runner.go:124] >         cache 30
	I0813 20:25:45.712240   78216 command_runner.go:124] >         loop
	I0813 20:25:45.712246   78216 command_runner.go:124] >         reload
	I0813 20:25:45.712252   78216 command_runner.go:124] >         loadbalance
	I0813 20:25:45.712257   78216 command_runner.go:124] >     }
	I0813 20:25:45.712263   78216 command_runner.go:124] > kind: ConfigMap
	I0813 20:25:45.712269   78216 command_runner.go:124] > metadata:
	I0813 20:25:45.712277   78216 command_runner.go:124] >   creationTimestamp: "2021-08-13T20:25:27Z"
	I0813 20:25:45.712281   78216 command_runner.go:124] >   name: coredns
	I0813 20:25:45.712287   78216 command_runner.go:124] >   namespace: kube-system
	I0813 20:25:45.712291   78216 command_runner.go:124] >   resourceVersion: "250"
	I0813 20:25:45.712296   78216 command_runner.go:124] >   uid: 09ed3532-9b20-4c93-942e-749628aa0249
	I0813 20:25:45.714376   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
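	The sed pipeline above splices a hosts block in front of the forward plugin so that host.minikube.internal resolves to the host gateway address. Reconstructed from the sed expression, the replaced Corefile gains:
	
		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}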
	I0813 20:25:45.714627   78216 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:25:45.714894   78216 kapi.go:59] client config for multinode-20210813202501-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:25:45.716045   78216 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813202501-13784" to be "Ready" ...
	I0813 20:25:45.716129   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:45.716141   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:45.716148   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:45.716154   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:45.718420   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:45.718436   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:45.718443   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:45.718448   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:45.718453   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:45.718467   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:45 GMT
	I0813 20:25:45.718472   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:45.718572   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:45.719857   78216 node_ready.go:49] node "multinode-20210813202501-13784" has status "Ready":"True"
	I0813 20:25:45.719871   78216 node_ready.go:38] duration metric: took 3.803225ms waiting for node "multinode-20210813202501-13784" to be "Ready" ...
	I0813 20:25:45.719879   78216 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
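The node_ready and pod_ready waiters drive the GET/response pairs that follow: each iteration fetches the object and inspects its status conditions. A minimal sketch of the node-side check with client-go (an illustration of the standard Ready-condition test; minikube's own helper in node_ready.go is not reproduced in this log):

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeIsReady reports whether the node's Ready condition is True,
    // i.e. the `"Ready":"True"` status logged above.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }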
	I0813 20:25:45.719938   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 20:25:45.719943   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:45.719948   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:45.719951   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:45.723560   78216 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:25:45.723575   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:45.723582   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:45 GMT
	I0813 20:25:45.723587   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:45.723591   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:45.723595   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:45.723600   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:45.724123   78216 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-558bd4d5db-cswgt","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"275a3fb1-5920-49ed-856d-9634250ee2dc","resourceVersion":"438","creationTimestamp":"2021-08-13T20:25:45Z","deletionTimestamp":"2021-08-13T20:26:15Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:control [truncated 54682 chars]
	I0813 20:25:45.733075   78216 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-cswgt" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:45.733168   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-cswgt
	I0813 20:25:45.733183   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:45.733191   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:45.733199   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:45.735256   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:45.735272   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:45.735278   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:45.735282   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:45.735287   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:45.735291   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:45.735295   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:45 GMT
	I0813 20:25:45.735429   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-cswgt","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"275a3fb1-5920-49ed-856d-9634250ee2dc","resourceVersion":"438","creationTimestamp":"2021-08-13T20:25:45Z","deletionTimestamp":"2021-08-13T20:26:15Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5702 chars]
	I0813 20:25:45.739502   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:45.739522   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:45.739529   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:45.739535   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:45.741233   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:45.741246   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:45.741252   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:45 GMT
	I0813 20:25:45.741257   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:45.741261   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:45.741266   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:45.741270   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:45.741406   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:45.749857   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:25:45.868421   78216 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:25:45.871011   78216 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:25:46.085135   78216 command_runner.go:124] > configmap/coredns replaced
	I0813 20:25:46.088596   78216 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0813 20:25:46.210484   78216 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0813 20:25:46.214829   78216 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0813 20:25:46.221227   78216 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 20:25:46.242436   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-cswgt
	I0813 20:25:46.242454   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:46.242460   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:46.242464   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:46.259246   78216 round_trippers.go:457] Response Status: 200 OK in 16 milliseconds
	I0813 20:25:46.259266   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:46.259273   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:46.259278   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:46 GMT
	I0813 20:25:46.259284   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:46.259293   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:46.259301   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:46.259420   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-cswgt","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"275a3fb1-5920-49ed-856d-9634250ee2dc","resourceVersion":"438","creationTimestamp":"2021-08-13T20:25:45Z","deletionTimestamp":"2021-08-13T20:26:15Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 5702 chars]
	I0813 20:25:46.259844   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:46.259861   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:46.259868   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:46.259874   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:46.262827   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:46.262845   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:46.262851   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:46.262856   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:46 GMT
	I0813 20:25:46.262862   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:46.262867   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:46.262873   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:46.262982   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:46.263170   78216 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 20:25:46.268217   78216 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0813 20:25:46.279035   78216 command_runner.go:124] > pod/storage-provisioner created
	I0813 20:25:46.283189   78216 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0813 20:25:46.285361   78216 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:25:46.285385   78216 addons.go:344] enableAddons completed in 676.242873ms
	I0813 20:25:46.742840   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-cswgt
	I0813 20:25:46.742860   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:46.742866   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:46.742870   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:46.744445   78216 round_trippers.go:457] Response Status: 404 Not Found in 1 milliseconds
	I0813 20:25:46.744465   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:46.744470   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:46.744473   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:46.744476   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:46.744480   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:46.744484   78216 round_trippers.go:463]     Content-Length: 216
	I0813 20:25:46.744489   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:46 GMT
	I0813 20:25:46.744516   78216 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-558bd4d5db-cswgt\" not found","reason":"NotFound","details":{"name":"coredns-558bd4d5db-cswgt","kind":"pods"},"code":404}
	I0813 20:25:46.744961   78216 pod_ready.go:97] error getting pod "coredns-558bd4d5db-cswgt" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-cswgt" not found
	I0813 20:25:46.745003   78216 pod_ready.go:81] duration metric: took 1.011899948s waiting for pod "coredns-558bd4d5db-cswgt" in "kube-system" namespace to be "Ready" ...
	E0813 20:25:46.745019   78216 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-cswgt" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-cswgt" not found
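The 404 above is expected rather than fatal: the first coredns replica already carried a deletionTimestamp in the PodList (the deployment is being scaled down), so by this poll it was gone. The waiter logs the NotFound, abandons that pod, and moves on to its replacement z5bmn below. A sketch of that skip-on-NotFound pattern using the apimachinery error helpers (an illustration; minikube's exact branch in pod_ready.go is not shown here):

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podGone distinguishes "pod was deleted, skip it" from a real API error.
    func podGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        _, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // pod deleted out from under us: move to the next replica
        }
        return false, err
    }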
	I0813 20:25:46.745025   78216 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace to be "Ready" ...
	I0813 20:25:46.745072   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:46.745083   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:46.745090   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:46.745096   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:46.758952   78216 round_trippers.go:457] Response Status: 200 OK in 13 milliseconds
	I0813 20:25:46.758972   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:46.758978   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:46 GMT
	I0813 20:25:46.758983   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:46.758993   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:46.758997   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:46.759002   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:46.759368   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:46.759920   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:46.759943   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:46.759951   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:46.759956   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:46.761689   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:46.761706   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:46.761711   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:46.761716   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:46 GMT
	I0813 20:25:46.761719   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:46.761724   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:46.761729   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:46.761873   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:47.262632   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:47.262654   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:47.262662   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:47.262668   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:47.265042   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:47.265064   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:47.265071   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:47.265075   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:47.265080   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:47.265084   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:47.265088   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:47 GMT
	I0813 20:25:47.265214   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:47.265659   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:47.265678   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:47.265685   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:47.265691   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:47.267442   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:47.267477   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:47.267485   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:47.267494   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:47.267499   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:47.267507   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:47.267512   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:47 GMT
	I0813 20:25:47.268025   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:47.762456   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:47.762481   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:47.762487   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:47.762491   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:47.764453   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:47.764480   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:47.764488   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:47.764494   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:47.764499   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:47 GMT
	I0813 20:25:47.764505   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:47.764510   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:47.764609   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:47.764941   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:47.764958   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:47.764963   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:47.764967   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:47.766736   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:47.766756   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:47.766762   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:47 GMT
	I0813 20:25:47.766767   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:47.766772   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:47.766776   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:47.766781   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:47.766896   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:48.262496   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:48.262520   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:48.262526   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:48.262530   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:48.264688   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:48.264708   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:48.264714   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:48.264720   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:48.264724   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:48.264729   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:48.264733   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:48 GMT
	I0813 20:25:48.264840   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:48.265149   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:48.265162   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:48.265167   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:48.265170   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:48.266777   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:48.266795   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:48.266801   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:48.266806   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:48.266810   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:48.266813   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:48.266816   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:48 GMT
	I0813 20:25:48.266914   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:48.762450   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:48.762474   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:48.762480   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:48.762484   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:48.764459   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:48.764485   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:48.764491   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:48.764496   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:48 GMT
	I0813 20:25:48.764501   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:48.764505   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:48.764510   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:48.764650   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:48.765011   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:48.765025   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:48.765029   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:48.765033   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:48.766735   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:48.766752   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:48.766756   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:48.766759   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:48.766764   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:48.766768   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:48.766773   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:48 GMT
	I0813 20:25:48.766886   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:48.767123   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
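From this point the log settles into a steady polling rhythm: a pod GET roughly every 500ms (20:25:48.262, 48.762, 49.262, ...), each followed by a node GET, until the pod turns Ready or the 6m0s budget runs out. A minimal sketch of such a loop with apimachinery's wait package, taking the interval and timeout from the cadence visible here (the loop shape is an assumption, not minikube's literal code):

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForPodReady polls checkReady every 500ms for up to 6 minutes,
    // matching the request cadence and wait budget seen in the log.
    func waitForPodReady(checkReady func() (bool, error)) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, checkReady)
    }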
	I0813 20:25:49.262326   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:49.262351   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:49.262358   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:49.262364   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:49.264539   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:49.264558   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:49.264563   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:49.264567   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:49 GMT
	I0813 20:25:49.264570   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:49.264573   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:49.264576   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:49.264651   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:49.264972   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:49.264984   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:49.264988   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:49.264992   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:49.266633   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:49.266647   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:49.266651   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:49.266654   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:49.266658   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:49.266662   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:49.266667   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:49 GMT
	I0813 20:25:49.266818   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:49.762391   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:49.762442   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:49.762448   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:49.762453   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:49.764540   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:49.764561   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:49.764568   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:49.764572   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:49.764577   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:49.764582   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:49 GMT
	I0813 20:25:49.764586   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:49.764677   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:49.764983   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:49.764997   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:49.765002   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:49.765006   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:49.766552   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:49.766570   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:49.766574   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:49.766578   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:49 GMT
	I0813 20:25:49.766581   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:49.766584   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:49.766587   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:49.766687   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:50.262287   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:50.262314   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:50.262321   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:50.262326   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:50.267364   78216 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 20:25:50.267395   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:50.267400   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:50.267404   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:50.267409   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:50.267413   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:50.267418   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:50 GMT
	I0813 20:25:50.267515   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:50.267907   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:50.267921   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:50.267925   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:50.267929   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:50.269737   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:50.269756   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:50.269763   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:50.269767   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:50 GMT
	I0813 20:25:50.269772   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:50.269776   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:50.269778   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:50.269938   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:50.762559   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:50.762580   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:50.762586   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:50.762590   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:50.764861   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:50.764884   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:50.764893   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:50.764898   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:50.764903   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:50.764908   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:50.764914   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:50 GMT
	I0813 20:25:50.765048   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:50.765368   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:50.765384   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:50.765389   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:50.765393   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:50.767003   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:50.767022   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:50.767029   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:50.767034   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:50.767039   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:50 GMT
	I0813 20:25:50.767045   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:50.767050   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:50.767152   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:50.767444   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
	I0813 20:25:51.262658   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:51.262680   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:51.262686   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:51.262691   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:51.264853   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:51.264871   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:51.264875   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:51.264879   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:51.264882   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:51.264884   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:51.264888   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:51 GMT
	I0813 20:25:51.264973   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:51.265296   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:51.265317   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:51.265323   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:51.265329   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:51.266909   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:51.266928   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:51.266934   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:51.266939   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:51.266943   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:51.266948   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:51.266951   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:51 GMT
	I0813 20:25:51.267034   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:51.762883   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:51.762913   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:51.762921   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:51.762926   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:51.764915   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:51.764937   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:51.764944   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:51.764950   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:51 GMT
	I0813 20:25:51.764955   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:51.764961   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:51.764966   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:51.765055   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:51.765387   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:51.765404   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:51.765412   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:51.765419   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:51.766980   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:51.767003   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:51.767010   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:51.767015   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:51.767022   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:51 GMT
	I0813 20:25:51.767026   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:51.767030   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:51.767123   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:52.262741   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:52.262769   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:52.262776   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:52.262782   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:52.264789   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:52.264812   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:52.264820   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:52.264825   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:52.264830   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:52.264835   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:52.264839   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:52 GMT
	I0813 20:25:52.264933   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:52.265242   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:52.265254   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:52.265259   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:52.265263   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:52.266900   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:52.266926   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:52.266933   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:52.266939   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:52.266944   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:52.266948   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:52.266956   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:52 GMT
	I0813 20:25:52.267043   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:52.762609   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:52.762632   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:52.762638   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:52.762643   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:52.764892   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:52.764912   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:52.764919   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:52.764924   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:52.764929   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:52 GMT
	I0813 20:25:52.764934   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:52.764938   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:52.765048   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:52.765375   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:52.765390   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:52.765397   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:52.765403   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:52.767076   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:52.767093   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:52.767099   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:52.767103   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:52 GMT
	I0813 20:25:52.767107   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:52.767111   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:52.767116   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:52.767213   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:52.767533   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
	I0813 20:25:53.262857   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:53.262879   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:53.262885   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:53.262889   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:53.265073   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:53.265092   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:53.265099   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:53.265104   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:53.265108   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:53.265113   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:53.265120   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:53 GMT
	I0813 20:25:53.265204   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:53.265567   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:53.265581   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:53.265586   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:53.265589   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:53.267190   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:53.267205   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:53.267209   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:53.267213   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:53.267216   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:53.267219   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:53 GMT
	I0813 20:25:53.267221   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:53.267310   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:53.762970   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:53.762997   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:53.763003   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:53.763007   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:53.765251   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:53.765269   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:53.765281   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:53.765285   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:53.765289   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:53.765295   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:53.765301   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:53 GMT
	I0813 20:25:53.765424   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:53.765822   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:53.765838   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:53.765842   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:53.765846   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:53.767515   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:53.767532   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:53.767539   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:53.767543   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:53.767548   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:53.767553   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:53.767557   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:53 GMT
	I0813 20:25:53.767639   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:54.263272   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:54.263296   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:54.263301   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:54.263306   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:54.306013   78216 round_trippers.go:457] Response Status: 200 OK in 42 milliseconds
	I0813 20:25:54.306038   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:54.306045   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:54.306049   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:54.306054   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:54.306058   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:54.306063   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:54 GMT
	I0813 20:25:54.306173   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:54.306544   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:54.306559   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:54.306564   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:54.306569   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:54.308297   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:54.308319   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:54.308326   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:54.308331   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:54.308338   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:54.308343   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:54.308362   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:54 GMT
	I0813 20:25:54.308450   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:54.763052   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:54.763073   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:54.763079   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:54.763083   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:54.765365   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:54.765382   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:54.765386   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:54.765390   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:54.765451   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:54.765460   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:54.765466   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:54 GMT
	I0813 20:25:54.765577   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:54.765890   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:54.765903   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:54.765908   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:54.765911   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:54.767472   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:54.767489   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:54.767495   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:54.767500   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:54.767504   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:54.767508   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:54.767512   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:54 GMT
	I0813 20:25:54.767592   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:54.767817   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
	I0813 20:25:55.262297   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:55.262320   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:55.262325   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:55.262330   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:55.264529   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:55.264548   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:55.264554   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:55.264558   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:55.264561   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:55.264565   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:55.264568   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:55 GMT
	I0813 20:25:55.264713   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:55.265135   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:55.265160   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:55.265166   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:55.265172   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:55.266907   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:55.266925   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:55.266932   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:55.266936   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:55.266946   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:55.266951   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:55.266959   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:55 GMT
	I0813 20:25:55.267061   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:55.762578   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:55.762598   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:55.762604   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:55.762608   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:55.764702   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:55.764718   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:55.764723   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:55.764726   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:55.764729   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:55.764732   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:55.764735   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:55 GMT
	I0813 20:25:55.764820   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:55.765123   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:55.765135   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:55.765140   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:55.765144   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:55.766898   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:55.766921   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:55.766929   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:55.766935   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:55.766941   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:55.766952   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:55.766958   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:55 GMT
	I0813 20:25:55.767062   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:56.262639   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:56.262668   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:56.262674   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:56.262678   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:56.264860   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:56.264884   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:56.264889   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:56.264893   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:56 GMT
	I0813 20:25:56.264896   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:56.264899   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:56.264903   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:56.265008   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:56.265334   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:56.265348   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:56.265353   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:56.265358   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:56.267108   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:56.267126   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:56.267133   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:56.267137   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:56.267141   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:56.267144   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:56.267147   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:56 GMT
	I0813 20:25:56.267230   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:56.762951   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:56.762982   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:56.762990   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:56.762996   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:56.765622   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:56.765645   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:56.765651   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:56.765656   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:56.765660   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:56.765666   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:56 GMT
	I0813 20:25:56.765670   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:56.765823   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:56.766139   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:56.766151   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:56.766156   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:56.766160   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:56.767835   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:56.767853   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:56.767859   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:56.767867   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:56.767872   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:56.767876   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:56.767880   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:56 GMT
	I0813 20:25:56.768050   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:25:56.768278   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
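
	(Editor's note on the repeating block above: this is minikube's pod readiness wait loop. Each iteration GETs the coredns pod, checks its Ready condition, GETs the node the pod is scheduled on, logs pod_ready.go:102 while the condition is still False, and retries roughly every 500ms. The sketch below reproduces the same polling pattern with client-go; it is illustrative only, not minikube's actual pod_ready.go, and the pod and namespace names are simply taken from this log.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes a reachable kubeconfig (e.g. the minikube profile's default).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		for {
			// Poll the pod, as the log does, until its Ready condition turns True.
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-558bd4d5db-z5bmn", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q not Ready yet\n", pod.Name)
			time.Sleep(500 * time.Millisecond)
		}
	}
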
	I0813 20:25:57.262660   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:25:57.262686   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:57.262692   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:57.262696   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:57.264890   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:25:57.264937   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:57.264945   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:57.264950   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:57.264954   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:57 GMT
	I0813 20:25:57.264958   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:57.264963   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:57.265054   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:25:57.265357   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:25:57.265370   78216 round_trippers.go:438] Request Headers:
	I0813 20:25:57.265375   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:25:57.265379   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:25:57.267058   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:25:57.267077   78216 round_trippers.go:460] Response Headers:
	I0813 20:25:57.267083   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:25:57.267088   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:25:57.267092   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:25:57 GMT
	I0813 20:25:57.267096   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:25:57.267101   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:25:57.267185   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	[... three identical pod/node polling cycles (20:25:57.762 through 20:25:58.763) omitted: the same GET /api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn and GET /api/v1/nodes/multinode-20210813202501-13784 pair, each 200 OK in 1-2 milliseconds with unchanged response bodies (pod resourceVersion 446, node resourceVersion 404) ...]
	I0813 20:25:58.768296   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
	[... five identical polling cycles (20:25:59.262 through 20:26:01.262) omitted ...]
	I0813 20:26:01.267629   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
	[... five identical polling cycles (20:26:01.763 through 20:26:03.762) omitted ...]
	I0813 20:26:03.767051   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
	I0813 20:26:04.262375   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:04.262405   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:04.262413   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:04.262419   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:04.264455   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:04.264474   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:04.264480   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:04.264491   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:04.264496   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:04.264500   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:04.264504   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:04 GMT
	I0813 20:26:04.264589   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:04.264929   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:04.264944   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:04.264951   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:04.264956   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:04.266627   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:04.266644   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:04.266652   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:04.266657   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:04 GMT
	I0813 20:26:04.266666   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:04.266671   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:04.266676   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:04.266777   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:04.762328   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:04.762352   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:04.762358   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:04.762362   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:04.764770   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:04.764789   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:04.764794   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:04.764798   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:04 GMT
	I0813 20:26:04.764801   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:04.764805   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:04.764808   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:04.764903   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:04.765229   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:04.765252   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:04.765256   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:04.765260   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:04.768054   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:04.768074   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:04.768080   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:04.768084   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:04.768088   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:04 GMT
	I0813 20:26:04.768091   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:04.768094   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:04.768210   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:05.262929   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:05.262966   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:05.262974   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:05.262979   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:05.265199   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:05.265219   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:05.265225   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:05 GMT
	I0813 20:26:05.265229   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:05.265232   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:05.265235   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:05.265238   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:05.265345   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:05.265734   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:05.265750   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:05.265759   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:05.265763   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:05.267397   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:05.267417   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:05.267423   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:05.267429   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:05.267433   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:05.267437   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:05.267448   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:05 GMT
	I0813 20:26:05.267560   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:05.763168   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:05.763191   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:05.763196   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:05.763200   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:05.765416   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:05.765432   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:05.765436   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:05.765446   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:05.765449   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:05.765452   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:05.765459   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:05 GMT
	I0813 20:26:05.765600   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:05.765938   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:05.765958   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:05.765963   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:05.765967   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:05.767602   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:05.767621   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:05.767626   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:05.767629   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:05.767632   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:05.767635   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:05.767639   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:05 GMT
	I0813 20:26:05.767768   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:05.768059   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
	I0813 20:26:06.263074   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:06.263097   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:06.263103   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:06.263107   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:06.265334   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:06.265355   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:06.265360   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:06.265364   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:06.265367   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:06.265370   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:06.265374   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:06 GMT
	I0813 20:26:06.265512   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:06.265875   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:06.265890   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:06.265894   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:06.265898   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:06.267703   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:06.267722   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:06.267729   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:06.267734   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:06 GMT
	I0813 20:26:06.267738   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:06.267742   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:06.267746   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:06.267856   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:06.762622   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:06.762646   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:06.762652   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:06.762656   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:06.765002   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:06.765026   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:06.765041   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:06.765047   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:06.765052   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:06.765057   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:06.765063   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:06 GMT
	I0813 20:26:06.765189   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:06.765583   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:06.765602   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:06.765609   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:06.765615   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:06.767319   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:06.767339   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:06.767345   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:06.767351   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:06.767355   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:06 GMT
	I0813 20:26:06.767360   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:06.767364   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:06.767487   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:07.262999   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:07.263025   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:07.263031   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:07.263036   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:07.265120   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:07.265141   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:07.265148   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:07.265154   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:07.265159   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:07.265164   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:07.265169   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:07 GMT
	I0813 20:26:07.265300   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:07.265667   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:07.265682   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:07.265687   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:07.265691   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:07.267320   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:07.267342   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:07.267349   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:07.267355   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:07.267360   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:07.267367   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:07.267373   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:07 GMT
	I0813 20:26:07.267492   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:07.763096   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:07.763120   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:07.763125   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:07.763129   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:07.765162   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:07.765184   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:07.765191   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:07.765210   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:07.765216   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:07.765221   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:07.765225   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:07 GMT
	I0813 20:26:07.765356   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:07.765775   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:07.765791   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:07.765796   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:07.765799   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:07.767428   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:07.767448   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:07.767454   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:07.767459   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:07 GMT
	I0813 20:26:07.767463   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:07.767468   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:07.767472   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:07.767588   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:08.263226   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:08.263251   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:08.263257   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:08.263261   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:08.265359   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:08.265377   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:08.265383   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:08.265387   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:08.265390   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:08.265393   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:08.265396   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:08 GMT
	I0813 20:26:08.265580   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:08.266201   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:08.266223   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:08.266229   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:08.266233   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:08.267867   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:08.267884   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:08.267890   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:08.267895   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:08 GMT
	I0813 20:26:08.267900   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:08.267914   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:08.267919   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:08.268025   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:08.268299   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
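	[editor's note] The repeated GET pod / GET node pairs above are minikube's readiness wait loop (pod_ready.go) re-fetching the Pod and its Node roughly every 500ms until the Pod's Ready condition turns True; the round_trippers lines are client-go's verbose request/response logging around each fetch. A minimal sketch of that polling pattern using client-go directly — names, namespace, and timings here mirror the log but the code is illustrative, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig from the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Re-fetch the Pod every 500ms (the cadence visible in the
		// timestamps above) until its Ready condition is True or we
		// hit the timeout. Pod name/namespace taken from the log.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-558bd4d5db-z5bmn", metav1.GetOptions{})
			if err != nil {
				return false, err // stop polling on API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not reported yet; keep polling
		})
		if err != nil {
			panic(err)
		}
	}

	Each false return from the condition function produces exactly one more GET cycle like those logged above, which is why the "Ready":"False" status lines recur at ~500ms intervals. [end editor's note]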
	I0813 20:26:08.762577   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:08.762605   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:08.762611   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:08.762616   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:08.765004   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:08.765022   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:08.765029   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:08.765034   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:08.765038   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:08.765042   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:08.765047   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:08 GMT
	I0813 20:26:08.765137   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:08.765481   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:08.765533   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:08.765541   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:08.765547   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:08.767255   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:08.767271   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:08.767276   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:08 GMT
	I0813 20:26:08.767279   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:08.767282   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:08.767284   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:08.767287   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:08.767401   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:09.263305   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:09.263332   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:09.263338   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:09.263342   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:09.265512   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:09.265542   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:09.265550   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:09.265555   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:09.265560   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:09.265564   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:09.265569   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:09 GMT
	I0813 20:26:09.265671   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:09.266013   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:09.266028   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:09.266034   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:09.266040   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:09.267765   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:09.267792   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:09.267798   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:09.267804   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:09.267807   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:09.267810   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:09.267814   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:09 GMT
	I0813 20:26:09.267946   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:09.762671   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:09.762700   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:09.762707   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:09.762711   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:09.764854   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:09.764883   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:09.764890   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:09.764904   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:09.764908   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:09.764911   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:09.764914   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:09 GMT
	I0813 20:26:09.765038   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:09.765378   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:09.765391   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:09.765396   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:09.765400   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:09.767040   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:09.767053   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:09.767058   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:09.767061   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:09.767064   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:09.767068   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:09.767081   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:09 GMT
	I0813 20:26:09.767225   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:10.262874   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:10.262906   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:10.262914   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:10.262920   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:10.264955   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:10.264982   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:10.264990   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:10.264995   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:10.265000   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:10.265005   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:10.265009   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:10 GMT
	I0813 20:26:10.265164   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:10.265535   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:10.265551   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:10.265556   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:10.265561   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:10.267210   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:10.267231   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:10.267238   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:10.267243   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:10.267248   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:10 GMT
	I0813 20:26:10.267252   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:10.267257   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:10.267380   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:10.762737   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:10.762758   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:10.762764   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:10.762768   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:10.765037   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:10.765057   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:10.765062   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:10.765065   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:10.765068   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:10.765071   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:10.765075   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:10 GMT
	I0813 20:26:10.765220   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:10.765640   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:10.765656   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:10.765661   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:10.765665   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:10.767390   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:10.767406   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:10.767412   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:10.767417   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:10.767420   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:10 GMT
	I0813 20:26:10.767425   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:10.767429   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:10.767555   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:10.767801   78216 pod_ready.go:102] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"False"
	I0813 20:26:11.263175   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:11.263200   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.263206   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.263211   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.265331   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:11.265352   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.265360   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.265364   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.265367   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.265371   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.265375   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.265536   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"446","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5627 chars]
	I0813 20:26:11.265882   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:11.265896   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.265901   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.265911   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.267517   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.267535   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.267541   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.267545   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.267548   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.267551   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.267554   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.267664   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:11.762562   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:11.762584   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.762590   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.762595   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.764291   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.764312   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.764319   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.764324   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.764329   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.764333   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.764338   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.764502   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"504","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5735 chars]
	I0813 20:26:11.764845   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:11.764858   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.764863   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.764867   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.766378   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.766395   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.766401   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.766406   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.766411   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.766416   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.766421   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.766520   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:11.766793   78216 pod_ready.go:92] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:11.766811   78216 pod_ready.go:81] duration metric: took 25.021779474s waiting for pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace to be "Ready" ...
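The block above is the pod_ready.go wait loop: minikube re-fetches the coredns pod and its node roughly every 500ms until the pod's Ready condition turns True, then records the elapsed time. Below is a minimal client-go sketch of that kind of loop, not minikube's actual implementation; it assumes a kubeconfig at the default path, the pod name is copied from this log, and the interval/timeout are illustrative.

// poll_ready.go - a minimal sketch of the kind of wait loop pod_ready.go
// logs above. The pod name below is the one from this particular run and
// would differ in other runs.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms with a 6-minute budget, matching the cadence and
	// "waiting up to 6m0s" budget visible in the log above.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-558bd4d5db-z5bmn", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// A pod counts as "Ready" when its PodReady condition is True.
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

The same loop then runs once per control-plane component below, which is why the etcd, kube-apiserver, kube-controller-manager, and kube-proxy checks each complete in a few milliseconds: those pods are already Ready on the first poll.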
	I0813 20:26:11.766820   78216 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.766882   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813202501-13784
	I0813 20:26:11.766893   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.766899   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.766905   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.768309   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.768336   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.768342   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.768348   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.768353   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.768366   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.768374   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.768507   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813202501-13784","namespace":"kube-system","uid":"019ebe07-83e4-44a1-a5c0-c1fd5f4d32bc","resourceVersion":"326","creationTimestamp":"2021-08-13T20:25:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"29b46eb226f31ece96d42b406a7c6fc4","kubernetes.io/config.mirror":"29b46eb226f31ece96d42b406a7c6fc4","kubernetes.io/config.seen":"2021-08-13T20:25:32.303988407Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash" [truncated 5559 chars]
	I0813 20:26:11.768850   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:11.768866   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.768873   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.768879   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.770339   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.770355   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.770361   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.770366   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.770370   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.770375   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.770379   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.770460   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:11.770707   78216 pod_ready.go:92] pod "etcd-multinode-20210813202501-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:11.770719   78216 pod_ready.go:81] duration metric: took 3.89251ms waiting for pod "etcd-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.770731   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.770769   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813202501-13784
	I0813 20:26:11.770777   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.770781   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.770784   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.772297   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.772314   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.772321   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.772325   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.772330   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.772334   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.772339   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.772471   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813202501-13784","namespace":"kube-system","uid":"87fefdcc-c5d1-42d3-991a-ad28c4f7a669","resourceVersion":"355","creationTimestamp":"2021-08-13T20:25:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"a494d4f8a8d6a2671115f307173e8700","kubernetes.io/config.mirror":"a494d4f8a8d6a2671115f307173e8700","kubernetes.io/config.seen":"2021-08-13T20:25:32.303990597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8088 chars]
	I0813 20:26:11.772868   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:11.772889   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.772896   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.772902   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.774312   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.774349   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.774356   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.774361   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.774366   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.774372   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.774377   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.774518   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:11.774772   78216 pod_ready.go:92] pod "kube-apiserver-multinode-20210813202501-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:11.774785   78216 pod_ready.go:81] duration metric: took 4.048306ms waiting for pod "kube-apiserver-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.774794   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.774833   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813202501-13784
	I0813 20:26:11.774841   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.774845   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.774849   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.776239   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.776255   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.776261   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.776265   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.776269   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.776274   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.776278   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.776383   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813202501-13784","namespace":"kube-system","uid":"03314bb8-93d9-4da5-b960-3580e4f5089a","resourceVersion":"289","creationTimestamp":"2021-08-13T20:25:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c8d2d20e82b9f18ce43c33bb9529b104","kubernetes.io/config.mirror":"c8d2d20e82b9f18ce43c33bb9529b104","kubernetes.io/config.seen":"2021-08-13T20:25:32.303992549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 7654 chars]
	I0813 20:26:11.776694   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:11.776708   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.776715   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.776721   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.778289   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.778308   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.778314   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.778319   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.778325   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.778330   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.778335   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.778483   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:11.778807   78216 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813202501-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:11.778829   78216 pod_ready.go:81] duration metric: took 4.027444ms waiting for pod "kube-controller-manager-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.778841   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5qfxb" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.778904   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qfxb
	I0813 20:26:11.778916   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.778924   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.778933   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.780421   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.780437   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.780442   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.780466   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.780475   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.780479   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.780487   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.780602   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5qfxb","generateName":"kube-proxy-","namespace":"kube-system","uid":"098e1fdf-be73-4c00-af36-bb0432215045","resourceVersion":"476","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8e8d5a52-c8fa-4a19-9f85-fd1c335f478a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e8d5a52-c8fa-4a19-9f85-fd1c335f478a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5754 chars]
	I0813 20:26:11.781667   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:11.781690   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.781699   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.781714   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.783194   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:11.783211   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.783216   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.783221   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.783237   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.783242   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.783247   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.783359   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:11.783622   78216 pod_ready.go:92] pod "kube-proxy-5qfxb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:11.783637   78216 pod_ready.go:81] duration metric: took 4.780146ms waiting for pod "kube-proxy-5qfxb" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.783647   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:11.962701   78216 request.go:600] Waited for 178.976333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202501-13784
	I0813 20:26:11.962771   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202501-13784
	I0813 20:26:11.962779   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:11.962788   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:11.962823   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:11.964990   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:11.965015   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:11.965022   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:11.965027   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:11.965030   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:11.965034   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:11.965037   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:11 GMT
	I0813 20:26:11.965177   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813202501-13784","namespace":"kube-system","uid":"89df0c7c-6465-4ac8-ae60-a2fdb61756f7","resourceVersion":"291","creationTimestamp":"2021-08-13T20:25:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7694bd83a6b91cca55d5de526505eb47","kubernetes.io/config.mirror":"7694bd83a6b91cca55d5de526505eb47","kubernetes.io/config.seen":"2021-08-13T20:25:17.563985654Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4536 chars]
	I0813 20:26:12.162768   78216 request.go:600] Waited for 197.283815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:12.162842   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:12.162848   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:12.162853   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:12.162857   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:12.164635   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:12.164656   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:12.164664   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:12.164669   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:12.164674   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:12.164679   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:12.164683   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:12 GMT
	I0813 20:26:12.164803   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:12.165072   78216 pod_ready.go:92] pod "kube-scheduler-multinode-20210813202501-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:12.165083   78216 pod_ready.go:81] duration metric: took 381.428416ms waiting for pod "kube-scheduler-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:12.165090   78216 pod_ready.go:38] duration metric: took 26.445202364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
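The "Waited for ... due to client-side throttling, not priority and fairness" lines sprinkled through this phase come from client-go's local token-bucket rate limiter, not from the server: the X-Kubernetes-Pf-* response headers identify the server-side priority-and-fairness buckets, but the delays logged here happen before the request leaves the client. A sketch of where that limiter lives on a rest.Config; the 5 QPS / burst-of-10 values are client-go's documented defaults, not something read from this log.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults to 5 requests/second with a burst of 10; once the
	// burst is spent, each additional request waits locally, which is what
	// produces "Waited for ... due to client-side throttling" lines.
	config.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(5, 10)
	fmt.Printf("rate limiter installed: %T\n", config.RateLimiter)
	// Hand config to kubernetes.NewForConfig as usual from here.
}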
	I0813 20:26:12.165142   78216 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:26:12.165210   78216 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:26:12.185055   78216 command_runner.go:124] > 1303
	I0813 20:26:12.185776   78216 api_server.go:70] duration metric: took 26.576679382s to wait for apiserver process to appear ...
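Process readiness is a single pgrep: any PID on stdout (the "1303" echoed by command_runner above) means an apiserver binary is running. A local stand-in for the same check follows; minikube actually executes this through its ssh_runner inside the node, so treat the direct exec here as a simplification.

// Process check sketch: run the same pgrep pattern the log shows and treat
// a matched PID on stdout as "apiserver process is up". pgrep exits
// non-zero when nothing matches, so the error branch covers absence.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver PID:", strings.TrimSpace(string(out)))
}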
	I0813 20:26:12.185807   78216 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:26:12.185816   78216 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:26:12.190168   78216 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:26:12.190239   78216 round_trippers.go:432] GET https://192.168.49.2:8443/version?timeout=32s
	I0813 20:26:12.190247   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:12.190252   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:12.190257   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:12.190967   78216 round_trippers.go:457] Response Status: 200 OK in 0 milliseconds
	I0813 20:26:12.190989   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:12.190996   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:12.191002   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:12.191007   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:12.191013   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:12.191021   78216 round_trippers.go:463]     Content-Length: 263
	I0813 20:26:12.191028   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:12 GMT
	I0813 20:26:12.191057   78216 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0813 20:26:12.191153   78216 api_server.go:139] control plane version: v1.21.3
	I0813 20:26:12.191171   78216 api_server.go:129] duration metric: took 5.359018ms to wait for apiserver health ...
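The health gate is two cheap requests: a raw GET of /healthz, which must answer 200 with the literal body "ok", then GET /version, whose JSON is the block printed above and yields the "control plane version: v1.21.3" summary. A client-go sketch of both calls, assuming a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /healthz: a healthy apiserver answers 200 with the body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version: decodes the same JSON shown above into a version.Info.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}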
	I0813 20:26:12.191181   78216 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:26:12.363555   78216 request.go:600] Waited for 172.309538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 20:26:12.363625   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 20:26:12.363631   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:12.363636   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:12.363640   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:12.366515   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:12.366539   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:12.366545   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:12.366551   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:12.366555   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:12 GMT
	I0813 20:26:12.366558   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:12.366562   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:12.367136   78216 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"504","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 54508 chars]
	I0813 20:26:12.368318   78216 system_pods.go:59] 8 kube-system pods found
	I0813 20:26:12.368340   78216 system_pods.go:61] "coredns-558bd4d5db-z5bmn" [217d65d1-6fe4-48d9-954f-7246653dbdd4] Running
	I0813 20:26:12.368345   78216 system_pods.go:61] "etcd-multinode-20210813202501-13784" [019ebe07-83e4-44a1-a5c0-c1fd5f4d32bc] Running
	I0813 20:26:12.368348   78216 system_pods.go:61] "kindnet-2k7g6" [f974e6cf-6607-465e-9a82-c175afed7c99] Running
	I0813 20:26:12.368352   78216 system_pods.go:61] "kube-apiserver-multinode-20210813202501-13784" [87fefdcc-c5d1-42d3-991a-ad28c4f7a669] Running
	I0813 20:26:12.368356   78216 system_pods.go:61] "kube-controller-manager-multinode-20210813202501-13784" [03314bb8-93d9-4da5-b960-3580e4f5089a] Running
	I0813 20:26:12.368362   78216 system_pods.go:61] "kube-proxy-5qfxb" [098e1fdf-be73-4c00-af36-bb0432215045] Running
	I0813 20:26:12.368366   78216 system_pods.go:61] "kube-scheduler-multinode-20210813202501-13784" [89df0c7c-6465-4ac8-ae60-a2fdb61756f7] Running
	I0813 20:26:12.368369   78216 system_pods.go:61] "storage-provisioner" [ee46d1da-a4a7-46b9-9ebe-6f53aef7c220] Running
	I0813 20:26:12.368375   78216 system_pods.go:74] duration metric: took 177.187155ms to wait for pod list to return data ...
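The "8 kube-system pods found" summary comes from a single LIST of the namespace followed by a per-pod status check, which is what the huge PodList body above is. A sketch that produces the same name/UID/phase lines:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Phase is Pending/Running/Succeeded/Failed/Unknown; the log prints
		// it after each pod name and UID.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}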
	I0813 20:26:12.368383   78216 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:26:12.562713   78216 request.go:600] Waited for 194.257243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0813 20:26:12.562767   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0813 20:26:12.562780   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:12.562789   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:12.562802   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:12.564931   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:12.564948   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:12.564953   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:12.564956   78216 round_trippers.go:463]     Content-Length: 304
	I0813 20:26:12.564960   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:12 GMT
	I0813 20:26:12.564962   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:12.564965   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:12.564968   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:12.564984   78216 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"36d3ff71-0eef-4045-a452-35fe2529e1ad","resourceVersion":"395","creationTimestamp":"2021-08-13T20:25:44Z"},"secrets":[{"name":"default-token-jk6rr"}]}]}
	I0813 20:26:12.565544   78216 default_sa.go:45] found service account: "default"
	I0813 20:26:12.565560   78216 default_sa.go:55] duration metric: took 197.17261ms for default service account to be created ...
	I0813 20:26:12.565568   78216 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:26:12.762974   78216 request.go:600] Waited for 197.328871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 20:26:12.763029   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 20:26:12.763037   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:12.763044   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:12.763050   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:12.766024   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:12.766042   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:12.766049   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:12.766054   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:12.766059   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:12.766064   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:12.766068   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:12 GMT
	I0813 20:26:12.766607   78216 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"504","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 54508 chars]
	I0813 20:26:12.767840   78216 system_pods.go:86] 8 kube-system pods found
	I0813 20:26:12.767862   78216 system_pods.go:89] "coredns-558bd4d5db-z5bmn" [217d65d1-6fe4-48d9-954f-7246653dbdd4] Running
	I0813 20:26:12.767869   78216 system_pods.go:89] "etcd-multinode-20210813202501-13784" [019ebe07-83e4-44a1-a5c0-c1fd5f4d32bc] Running
	I0813 20:26:12.767873   78216 system_pods.go:89] "kindnet-2k7g6" [f974e6cf-6607-465e-9a82-c175afed7c99] Running
	I0813 20:26:12.767878   78216 system_pods.go:89] "kube-apiserver-multinode-20210813202501-13784" [87fefdcc-c5d1-42d3-991a-ad28c4f7a669] Running
	I0813 20:26:12.767884   78216 system_pods.go:89] "kube-controller-manager-multinode-20210813202501-13784" [03314bb8-93d9-4da5-b960-3580e4f5089a] Running
	I0813 20:26:12.767891   78216 system_pods.go:89] "kube-proxy-5qfxb" [098e1fdf-be73-4c00-af36-bb0432215045] Running
	I0813 20:26:12.767898   78216 system_pods.go:89] "kube-scheduler-multinode-20210813202501-13784" [89df0c7c-6465-4ac8-ae60-a2fdb61756f7] Running
	I0813 20:26:12.767902   78216 system_pods.go:89] "storage-provisioner" [ee46d1da-a4a7-46b9-9ebe-6f53aef7c220] Running
	I0813 20:26:12.767909   78216 system_pods.go:126] duration metric: took 202.336283ms to wait for k8s-apps to be running ...
	I0813 20:26:12.767918   78216 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:26:12.767965   78216 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:26:12.777157   78216 system_svc.go:56] duration metric: took 9.233536ms WaitForService to wait for kubelet.
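The kubelet check relies on the exit code alone: "systemctl is-active --quiet" prints nothing and exits 0 when at least one of the named units is active. A local stand-in mirroring the exact argument list from the ssh_runner line above (minikube runs it over SSH inside the node):

// Service check sketch: the exit code of "systemctl is-active --quiet"
// answers "is kubelet running?" with no output parsing needed. The extra
// "service" token is copied verbatim from the command in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}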
	I0813 20:26:12.777190   78216 kubeadm.go:547] duration metric: took 27.168082932s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:26:12.777235   78216 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:26:12.962604   78216 request.go:600] Waited for 185.280757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0813 20:26:12.962661   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0813 20:26:12.962667   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:12.962672   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:12.962677   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:12.964782   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:12.964800   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:12.964806   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:12.964810   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:12.964814   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:12.964818   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:12.964822   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:12 GMT
	I0813 20:26:12.964964   78216 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 6653 chars]
	I0813 20:26:12.966050   78216 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:26:12.966075   78216 node_conditions.go:123] node cpu capacity is 8
	I0813 20:26:12.966140   78216 node_conditions.go:105] duration metric: took 188.899136ms to run NodePressure ...
	I0813 20:26:12.966185   78216 start.go:231] waiting for startup goroutines ...
	I0813 20:26:12.968691   78216 out.go:177] 
	I0813 20:26:12.968952   78216 config.go:177] Loaded profile config "multinode-20210813202501-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:26:12.969061   78216 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/config.json ...
	I0813 20:26:12.971470   78216 out.go:177] * Starting node multinode-20210813202501-13784-m02 in cluster multinode-20210813202501-13784
	I0813 20:26:12.971504   78216 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:26:12.972969   78216 out.go:177] * Pulling base image ...
	I0813 20:26:12.972994   78216 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:26:12.973011   78216 cache.go:56] Caching tarball of preloaded images
	I0813 20:26:12.973097   78216 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:26:12.973156   78216 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:26:12.973178   78216 cache.go:59] Finished verifying existence of preloaded tar for v1.21.3 on crio
	I0813 20:26:12.973277   78216 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/config.json ...
	I0813 20:26:13.059855   78216 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:26:13.059884   78216 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:26:13.059900   78216 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:26:13.059931   78216 start.go:313] acquiring machines lock for multinode-20210813202501-13784-m02: {Name:mk5163917ac00824513abaea303a60f12a4be88c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:26:13.060079   78216 start.go:317] acquired machines lock for "multinode-20210813202501-13784-m02" in 111.625µs
	I0813 20:26:13.060106   78216 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813202501-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202501-13784 Namespace:default APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 20:26:13.060180   78216 start.go:126] createHost starting for "m02" (driver="docker")
	I0813 20:26:13.062467   78216 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 20:26:13.062568   78216 start.go:160] libmachine.API.Create for "multinode-20210813202501-13784" (driver="docker")
	I0813 20:26:13.062594   78216 client.go:168] LocalClient.Create starting
	I0813 20:26:13.062648   78216 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:26:13.062672   78216 main.go:130] libmachine: Decoding PEM data...
	I0813 20:26:13.062689   78216 main.go:130] libmachine: Parsing certificate...
	I0813 20:26:13.062799   78216 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:26:13.062817   78216 main.go:130] libmachine: Decoding PEM data...
	I0813 20:26:13.062827   78216 main.go:130] libmachine: Parsing certificate...
	I0813 20:26:13.063087   78216 cli_runner.go:115] Run: docker network inspect multinode-20210813202501-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:26:13.100125   78216 network_create.go:67] Found existing network {name:multinode-20210813202501-13784 subnet:0xc000ca46c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0813 20:26:13.100163   78216 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20210813202501-13784-m02" container
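For reference, the network lookup above can be reproduced by hand with a much simpler template; a minimal sketch using the same network name:

    # Prints the network's name, subnet, and gateway (the fields minikube parses above).
    docker network inspect multinode-20210813202501-13784 \
      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'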
	I0813 20:26:13.100211   78216 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:26:13.135516   78216 cli_runner.go:115] Run: docker volume create multinode-20210813202501-13784-m02 --label name.minikube.sigs.k8s.io=multinode-20210813202501-13784-m02 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:26:13.173338   78216 oci.go:102] Successfully created a docker volume multinode-20210813202501-13784-m02
	I0813 20:26:13.173439   78216 cli_runner.go:115] Run: docker run --rm --name multinode-20210813202501-13784-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210813202501-13784-m02 --entrypoint /usr/bin/test -v multinode-20210813202501-13784-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:26:13.864775   78216 oci.go:106] Successfully prepared a docker volume multinode-20210813202501-13784-m02
	W0813 20:26:13.864832   78216 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:26:13.864841   78216 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:26:13.864854   78216 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:26:13.864890   78216 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:26:13.864896   78216 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:26:13.864967   78216 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210813202501-13784-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
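The sidecar run above streams the lz4 preload tarball into the node's named volume before the node container starts. A standalone sketch of the same pattern, with PRELOAD and VOLUME as placeholders for the full paths in the log (image digest omitted here):

    # Extract an lz4 tarball into a named docker volume via a throwaway container.
    PRELOAD=/path/to/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
    VOLUME=multinode-20210813202501-13784-m02
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" -v "$VOLUME:/extractDir" \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032 \
      -I lz4 -xf /preloaded.tar -C /extractDir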
	I0813 20:26:13.946177   78216 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210813202501-13784-m02 --name multinode-20210813202501-13784-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210813202501-13784-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210813202501-13784-m02 --network multinode-20210813202501-13784 --ip 192.168.49.3 --volume multinode-20210813202501-13784-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:26:14.467921   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784-m02 --format={{.State.Running}}
	I0813 20:26:14.511869   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784-m02 --format={{.State.Status}}
	I0813 20:26:14.557723   78216 cli_runner.go:115] Run: docker exec multinode-20210813202501-13784-m02 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:26:14.683377   78216 oci.go:278] the created container "multinode-20210813202501-13784-m02" has a running status.
	I0813 20:26:14.683420   78216 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa...
	I0813 20:26:14.788882   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0813 20:26:14.788944   78216 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:26:15.152216   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784-m02 --format={{.State.Status}}
	I0813 20:26:15.196187   78216 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:26:15.196229   78216 kic_runner.go:115] Args: [docker exec --privileged multinode-20210813202501-13784-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:26:17.347915   78216 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210813202501-13784-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.482910321s)
	I0813 20:26:17.347944   78216 kic.go:188] duration metric: took 3.483052 seconds to extract preloaded images to volume
	I0813 20:26:17.348012   78216 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784-m02 --format={{.State.Status}}
	I0813 20:26:17.386496   78216 machine.go:88] provisioning docker machine ...
	I0813 20:26:17.386580   78216 ubuntu.go:169] provisioning hostname "multinode-20210813202501-13784-m02"
	I0813 20:26:17.386649   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:26:17.424304   78216 main.go:130] libmachine: Using SSH client type: native
	I0813 20:26:17.424500   78216 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0813 20:26:17.424521   78216 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813202501-13784-m02 && echo "multinode-20210813202501-13784-m02" | sudo tee /etc/hostname
	I0813 20:26:17.557208   78216 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813202501-13784-m02
	
	I0813 20:26:17.557290   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:26:17.594444   78216 main.go:130] libmachine: Using SSH client type: native
	I0813 20:26:17.594619   78216 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0813 20:26:17.594695   78216 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813202501-13784-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813202501-13784-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813202501-13784-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:26:17.717103   78216 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:26:17.717132   78216 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemoteP
ath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:26:17.717151   78216 ubuntu.go:177] setting up certificates
	I0813 20:26:17.717161   78216 provision.go:83] configureAuth start
	I0813 20:26:17.717212   78216 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202501-13784-m02
	I0813 20:26:17.756206   78216 provision.go:138] copyHostCerts
	I0813 20:26:17.756242   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:26:17.756267   78216 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:26:17.756279   78216 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:26:17.756342   78216 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:26:17.756405   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:26:17.756430   78216 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:26:17.756437   78216 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:26:17.756463   78216 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:26:17.756503   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:26:17.756519   78216 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:26:17.756526   78216 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:26:17.756544   78216 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:26:17.756585   78216 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813202501-13784-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210813202501-13784-m02]
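The server cert generated above is a CA-signed cert whose SANs cover the node IP, loopback, and the node's hostnames. A rough plain-openssl equivalent (bash; file names are placeholders, key size and validity are assumptions, not what minikube uses internally):

    # Issue a server cert signed by ca.pem/ca-key.pem with the SANs listed above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=minikube/CN=minikube"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.49.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-20210813202501-13784-m02')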
	I0813 20:26:17.869060   78216 provision.go:172] copyRemoteCerts
	I0813 20:26:17.869122   78216 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:26:17.869165   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:26:17.906966   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa Username:docker}
	I0813 20:26:17.996432   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 20:26:17.996491   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:26:18.012469   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 20:26:18.012513   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 20:26:18.028180   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 20:26:18.028228   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:26:18.043473   78216 provision.go:86] duration metric: configureAuth took 326.302537ms
	I0813 20:26:18.043493   78216 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:26:18.043625   78216 config.go:177] Loaded profile config "multinode-20210813202501-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:26:18.043732   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:26:18.081216   78216 main.go:130] libmachine: Using SSH client type: native
	I0813 20:26:18.081367   78216 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0813 20:26:18.081385   78216 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:26:18.444705   78216 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:26:18.444735   78216 machine.go:91] provisioned docker machine in 1.058212989s
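The sysconfig drop-in written a few lines above can be sanity-checked on the node; a sketch (paths taken from that command):

    # Confirm the drop-in exists and CRI-O came back after the restart.
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio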
	I0813 20:26:18.444744   78216 client.go:171] LocalClient.Create took 5.382143114s
	I0813 20:26:18.444757   78216 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813202501-13784" took 5.382189733s
	I0813 20:26:18.444764   78216 start.go:267] post-start starting for "multinode-20210813202501-13784-m02" (driver="docker")
	I0813 20:26:18.444769   78216 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:26:18.444826   78216 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:26:18.444861   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:26:18.482635   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa Username:docker}
	I0813 20:26:18.572786   78216 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:26:18.575364   78216 command_runner.go:124] > NAME="Ubuntu"
	I0813 20:26:18.575392   78216 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0813 20:26:18.575399   78216 command_runner.go:124] > ID=ubuntu
	I0813 20:26:18.575407   78216 command_runner.go:124] > ID_LIKE=debian
	I0813 20:26:18.575412   78216 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0813 20:26:18.575417   78216 command_runner.go:124] > VERSION_ID="20.04"
	I0813 20:26:18.575423   78216 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0813 20:26:18.575463   78216 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0813 20:26:18.575469   78216 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0813 20:26:18.575480   78216 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0813 20:26:18.575487   78216 command_runner.go:124] > VERSION_CODENAME=focal
	I0813 20:26:18.575491   78216 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0813 20:26:18.575599   78216 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:26:18.575614   78216 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:26:18.575622   78216 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:26:18.575630   78216 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:26:18.575642   78216 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:26:18.575691   78216 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:26:18.575774   78216 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:26:18.575785   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> /etc/ssl/certs/137842.pem
	I0813 20:26:18.575887   78216 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:26:18.582172   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:26:18.597891   78216 start.go:270] post-start completed in 153.115249ms
	I0813 20:26:18.598197   78216 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202501-13784-m02
	I0813 20:26:18.635767   78216 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/config.json ...
	I0813 20:26:18.636012   78216 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:26:18.636057   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:26:18.674510   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa Username:docker}
	I0813 20:26:18.761405   78216 command_runner.go:124] > 34%
	I0813 20:26:18.761609   78216 start.go:129] duration metric: createHost completed in 5.701416942s
	I0813 20:26:18.761634   78216 start.go:80] releasing machines lock for "multinode-20210813202501-13784-m02", held for 5.701543273s
	I0813 20:26:18.761738   78216 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202501-13784-m02
	I0813 20:26:18.801662   78216 out.go:177] * Found network options:
	I0813 20:26:18.803127   78216 out.go:177]   - NO_PROXY=192.168.49.2
	W0813 20:26:18.803179   78216 proxy.go:118] fail to check proxy env: Error ip not in block
	W0813 20:26:18.803209   78216 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 20:26:18.803282   78216 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:26:18.803325   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:26:18.803357   78216 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:26:18.803421   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:26:18.846369   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa Username:docker}
	I0813 20:26:18.847248   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa Username:docker}
	I0813 20:26:19.124789   78216 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 20:26:19.124813   78216 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 20:26:19.124820   78216 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 20:26:19.124825   78216 command_runner.go:124] > The document has moved
	I0813 20:26:19.124831   78216 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 20:26:19.124835   78216 command_runner.go:124] > </BODY></HTML>
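The 302 from k8s.gcr.io is the expected signal that the registry is reachable from inside the node. A compact probe in the same spirit, printing only the HTTP status code:

    # 302 (or 200) means outbound registry access works; 000 means no connectivity.
    curl -sS -m 2 -o /dev/null -w '%{http_code}\n' https://k8s.gcr.io/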
	I0813 20:26:19.124896   78216 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:26:19.134011   78216 docker.go:153] disabling docker service ...
	I0813 20:26:19.134061   78216 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:26:19.143187   78216 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:26:19.151373   78216 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:26:19.214458   78216 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 20:26:19.214529   78216 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:26:19.278558   78216 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 20:26:19.278624   78216 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:26:19.287345   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:26:19.299120   78216 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 20:26:19.299142   78216 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
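With /etc/crictl.yaml pointing both endpoints at the CRI-O socket as above, crictl works without per-invocation flags; a one-line check (sketch):

    # Succeeds only if crictl can reach CRI-O through the configured socket.
    sudo crictl info >/dev/null && echo 'crictl can reach CRI-O'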
	I0813 20:26:19.299175   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:26:19.306427   78216 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:26:19.306470   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
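Both sed edits rewrite keys in /etc/crio/crio.conf in place. A quick way to confirm they took effect (key names per the two commands above):

    # Should print the pause image and the kindnet default network.
    grep -E '^(pause_image|cni_default_network)' /etc/crio/crio.conf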
	I0813 20:26:19.314033   78216 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:26:19.319401   78216 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:26:19.319938   78216 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:26:19.319991   78216 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:26:19.326374   78216 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
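The sequence above is the usual netfilter recovery path: the sysctl stat fails because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding enabled. The equivalent sysctl form, condensed (a sketch, not the exact commands run):

    # Load the bridge-netfilter module, then turn on the two settings kube-proxy needs.
    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1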
	I0813 20:26:19.332031   78216 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:26:19.388431   78216 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:26:19.396950   78216 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:26:19.397010   78216 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:26:19.399830   78216 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 20:26:19.399851   78216 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 20:26:19.399861   78216 command_runner.go:124] > Device: afh/175d	Inode: 721345      Links: 1
	I0813 20:26:19.399876   78216 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:26:19.399887   78216 command_runner.go:124] > Access: 2021-08-13 20:26:18.432420634 +0000
	I0813 20:26:19.399899   78216 command_runner.go:124] > Modify: 2021-08-13 20:26:18.432420634 +0000
	I0813 20:26:19.399908   78216 command_runner.go:124] > Change: 2021-08-13 20:26:18.432420634 +0000
	I0813 20:26:19.399912   78216 command_runner.go:124] >  Birth: -
	I0813 20:26:19.399949   78216 start.go:413] Will wait 60s for crictl version
	I0813 20:26:19.399992   78216 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:26:19.425977   78216 command_runner.go:124] > Version:  0.1.0
	I0813 20:26:19.425995   78216 command_runner.go:124] > RuntimeName:  cri-o
	I0813 20:26:19.426000   78216 command_runner.go:124] > RuntimeVersion:  1.20.3
	I0813 20:26:19.426006   78216 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 20:26:19.426021   78216 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:26:19.426077   78216 ssh_runner.go:149] Run: crio --version
	I0813 20:26:19.481798   78216 command_runner.go:124] > crio version 1.20.3
	I0813 20:26:19.481819   78216 command_runner.go:124] > Version:       1.20.3
	I0813 20:26:19.481826   78216 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0813 20:26:19.481831   78216 command_runner.go:124] > GitTreeState:  clean
	I0813 20:26:19.481837   78216 command_runner.go:124] > BuildDate:     2021-07-14T23:38:00Z
	I0813 20:26:19.481842   78216 command_runner.go:124] > GoVersion:     go1.15.2
	I0813 20:26:19.481846   78216 command_runner.go:124] > Compiler:      gc
	I0813 20:26:19.481852   78216 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:26:19.481857   78216 command_runner.go:124] > Linkmode:      dynamic
	I0813 20:26:19.483010   78216 command_runner.go:124] ! time="2021-08-13T20:26:19Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 20:26:19.483085   78216 ssh_runner.go:149] Run: crio --version
	I0813 20:26:19.541026   78216 command_runner.go:124] > crio version 1.20.3
	I0813 20:26:19.541051   78216 command_runner.go:124] > Version:       1.20.3
	I0813 20:26:19.541062   78216 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0813 20:26:19.541068   78216 command_runner.go:124] > GitTreeState:  clean
	I0813 20:26:19.541078   78216 command_runner.go:124] > BuildDate:     2021-07-14T23:38:00Z
	I0813 20:26:19.541084   78216 command_runner.go:124] > GoVersion:     go1.15.2
	I0813 20:26:19.541091   78216 command_runner.go:124] > Compiler:      gc
	I0813 20:26:19.541101   78216 command_runner.go:124] > Platform:      linux/amd64
	I0813 20:26:19.541110   78216 command_runner.go:124] > Linkmode:      dynamic
	I0813 20:26:19.542203   78216 command_runner.go:124] ! time="2021-08-13T20:26:19Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 20:26:19.544388   78216 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:26:19.545853   78216 out.go:177]   - env NO_PROXY=192.168.49.2
	I0813 20:26:19.545929   78216 cli_runner.go:115] Run: docker network inspect multinode-20210813202501-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:26:19.582294   78216 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:26:19.585509   78216 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:26:19.594018   78216 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784 for IP: 192.168.49.3
	I0813 20:26:19.594067   78216 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:26:19.594089   78216 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:26:19.594104   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 20:26:19.594123   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 20:26:19.594139   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 20:26:19.594156   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 20:26:19.594212   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:26:19.594266   78216 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:26:19.594278   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:26:19.594301   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:26:19.594329   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:26:19.594350   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:26:19.594395   78216 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:26:19.594421   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> /usr/share/ca-certificates/137842.pem
	I0813 20:26:19.594437   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:19.594448   78216 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem -> /usr/share/ca-certificates/13784.pem
	I0813 20:26:19.594782   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:26:19.610269   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:26:19.625681   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:26:19.641540   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:26:19.657067   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:26:19.672241   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:26:19.687375   78216 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:26:19.702651   78216 ssh_runner.go:149] Run: openssl version
	I0813 20:26:19.707052   78216 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0813 20:26:19.707158   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:26:19.715290   78216 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:26:19.718124   78216 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:26:19.718180   78216 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:26:19.718217   78216 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:26:19.722626   78216 command_runner.go:124] > 51391683
	I0813 20:26:19.722788   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:26:19.729434   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:26:19.736351   78216 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:26:19.739103   78216 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:26:19.739140   78216 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:26:19.739176   78216 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:26:19.743459   78216 command_runner.go:124] > 3ec20f2e
	I0813 20:26:19.743609   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:26:19.750382   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:26:19.756993   78216 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:19.759735   78216 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:19.759763   78216 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:19.759799   78216 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:26:19.764075   78216 command_runner.go:124] > b5213941
	I0813 20:26:19.764269   78216 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
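Each cert above is first linked under /etc/ssl/certs and then exposed to OpenSSL through a subject-hash symlink, which is how the three ln -fs commands derive their .0 names. The pattern, generalized (cert path is a placeholder):

    # OpenSSL resolves CAs by <subject-hash>.0 symlinks in /etc/ssl/certs.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"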
	I0813 20:26:19.771117   78216 ssh_runner.go:149] Run: crio config
	I0813 20:26:19.833614   78216 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 20:26:19.833646   78216 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 20:26:19.833656   78216 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 20:26:19.833661   78216 command_runner.go:124] > #
	I0813 20:26:19.833677   78216 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 20:26:19.833691   78216 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 20:26:19.833703   78216 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 20:26:19.833716   78216 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 20:26:19.833726   78216 command_runner.go:124] > # reload'.
	I0813 20:26:19.833736   78216 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 20:26:19.833749   78216 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 20:26:19.833762   78216 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 20:26:19.833775   78216 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 20:26:19.833783   78216 command_runner.go:124] > [crio]
	I0813 20:26:19.833793   78216 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 20:26:19.833805   78216 command_runner.go:124] > # containers images, in this directory.
	I0813 20:26:19.833816   78216 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 20:26:19.833834   78216 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 20:26:19.833845   78216 command_runner.go:124] > #runroot = "/run/containers/storage"
	I0813 20:26:19.833854   78216 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 20:26:19.833867   78216 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 20:26:19.833875   78216 command_runner.go:124] > #storage_driver = "overlay"
	I0813 20:26:19.833884   78216 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0813 20:26:19.833893   78216 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 20:26:19.833900   78216 command_runner.go:124] > #storage_option = [
	I0813 20:26:19.833907   78216 command_runner.go:124] > #	"overlay.mountopt=nodev",
	I0813 20:26:19.833913   78216 command_runner.go:124] > #]
	I0813 20:26:19.833923   78216 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 20:26:19.833932   78216 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 20:26:19.833941   78216 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 20:26:19.833951   78216 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 20:26:19.833963   78216 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 20:26:19.833976   78216 command_runner.go:124] > # always happen on a node reboot
	I0813 20:26:19.833984   78216 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 20:26:19.833993   78216 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 20:26:19.834006   78216 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 20:26:19.834013   78216 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 20:26:19.834025   78216 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 20:26:19.834034   78216 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 20:26:19.834040   78216 command_runner.go:124] > [crio.api]
	I0813 20:26:19.834048   78216 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 20:26:19.834055   78216 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 20:26:19.834063   78216 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 20:26:19.834102   78216 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 20:26:19.834114   78216 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 20:26:19.834122   78216 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 20:26:19.834129   78216 command_runner.go:124] > stream_port = "0"
	I0813 20:26:19.834138   78216 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 20:26:19.834147   78216 command_runner.go:124] > stream_enable_tls = false
	I0813 20:26:19.834156   78216 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 20:26:19.834163   78216 command_runner.go:124] > stream_idle_timeout = ""
	I0813 20:26:19.834173   78216 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 20:26:19.834183   78216 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 20:26:19.834188   78216 command_runner.go:124] > # minutes.
	I0813 20:26:19.834195   78216 command_runner.go:124] > stream_tls_cert = ""
	I0813 20:26:19.834205   78216 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 20:26:19.834215   78216 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 20:26:19.834222   78216 command_runner.go:124] > stream_tls_key = ""
	I0813 20:26:19.834229   78216 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 20:26:19.834236   78216 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 20:26:19.834242   78216 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 20:26:19.834246   78216 command_runner.go:124] > stream_tls_ca = ""
	I0813 20:26:19.834254   78216 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:26:19.834259   78216 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 20:26:19.834267   78216 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 20:26:19.834271   78216 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
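The [crio.api] settings above describe the gRPC surface the kubelet talks to. A minimal sketch of probing that socket directly, assuming the k8s.io/cri-api v1alpha2 client (current for the CRI-O 1.20 era shown here) and the socket path from the config; this is an illustration, not minikube's own code:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"

        "google.golang.org/grpc"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // Dial the AF_LOCAL socket from `listen = "/var/run/crio/crio.sock"`.
        conn, err := grpc.DialContext(ctx, "/var/run/crio/crio.sock",
            grpc.WithInsecure(), grpc.WithBlock(),
            grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", addr)
            }))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
    }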
	I0813 20:26:19.834277   78216 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 20:26:19.834283   78216 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 20:26:19.834286   78216 command_runner.go:124] > [crio.runtime]
	I0813 20:26:19.834292   78216 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 20:26:19.834298   78216 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 20:26:19.834304   78216 command_runner.go:124] > # "nofile=1024:2048"
	I0813 20:26:19.834311   78216 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 20:26:19.834316   78216 command_runner.go:124] > #default_ulimits = [
	I0813 20:26:19.834319   78216 command_runner.go:124] > #]
	I0813 20:26:19.834332   78216 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 20:26:19.834336   78216 command_runner.go:124] > no_pivot = false
	I0813 20:26:19.834342   78216 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 20:26:19.834363   78216 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 20:26:19.834368   78216 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 20:26:19.834374   78216 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 20:26:19.834379   78216 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 20:26:19.834382   78216 command_runner.go:124] > conmon = ""
	I0813 20:26:19.834386   78216 command_runner.go:124] > # Cgroup setting for conmon
	I0813 20:26:19.834390   78216 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 20:26:19.834396   78216 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 20:26:19.834402   78216 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 20:26:19.834405   78216 command_runner.go:124] > conmon_env = [
	I0813 20:26:19.834411   78216 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 20:26:19.834413   78216 command_runner.go:124] > ]
	I0813 20:26:19.834419   78216 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 20:26:19.834423   78216 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 20:26:19.834429   78216 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 20:26:19.834432   78216 command_runner.go:124] > default_env = [
	I0813 20:26:19.834435   78216 command_runner.go:124] > ]
	I0813 20:26:19.834441   78216 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 20:26:19.834444   78216 command_runner.go:124] > selinux = false
	I0813 20:26:19.834451   78216 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 20:26:19.834457   78216 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 20:26:19.834462   78216 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 20:26:19.834466   78216 command_runner.go:124] > seccomp_profile = ""
	I0813 20:26:19.834471   78216 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 20:26:19.834477   78216 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 20:26:19.834483   78216 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 20:26:19.834487   78216 command_runner.go:124] > # which might increase security.
	I0813 20:26:19.834491   78216 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 20:26:19.834497   78216 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 20:26:19.834503   78216 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 20:26:19.834509   78216 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 20:26:19.834517   78216 command_runner.go:124] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0813 20:26:19.834523   78216 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:19.834527   78216 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 20:26:19.834534   78216 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 20:26:19.834537   78216 command_runner.go:124] > # irqbalance daemon.
	I0813 20:26:19.834542   78216 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 20:26:19.834547   78216 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 20:26:19.834551   78216 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 20:26:19.834557   78216 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 20:26:19.834561   78216 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 20:26:19.834567   78216 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 20:26:19.834573   78216 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 20:26:19.834577   78216 command_runner.go:124] > # will be added.
	I0813 20:26:19.834580   78216 command_runner.go:124] > default_capabilities = [
	I0813 20:26:19.834584   78216 command_runner.go:124] > 	"CHOWN",
	I0813 20:26:19.834587   78216 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 20:26:19.834590   78216 command_runner.go:124] > 	"FSETID",
	I0813 20:26:19.834593   78216 command_runner.go:124] > 	"FOWNER",
	I0813 20:26:19.834597   78216 command_runner.go:124] > 	"SETGID",
	I0813 20:26:19.834600   78216 command_runner.go:124] > 	"SETUID",
	I0813 20:26:19.834603   78216 command_runner.go:124] > 	"SETPCAP",
	I0813 20:26:19.834636   78216 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 20:26:19.834644   78216 command_runner.go:124] > 	"KILL",
	I0813 20:26:19.834650   78216 command_runner.go:124] > ]
	I0813 20:26:19.834663   78216 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 20:26:19.834676   78216 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:26:19.834683   78216 command_runner.go:124] > default_sysctls = [
	I0813 20:26:19.834687   78216 command_runner.go:124] > ]
	I0813 20:26:19.834698   78216 command_runner.go:124] > # List of additional devices, specified as
	I0813 20:26:19.834714   78216 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 20:26:19.834723   78216 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 20:26:19.834730   78216 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 20:26:19.834736   78216 command_runner.go:124] > additional_devices = [
	I0813 20:26:19.834740   78216 command_runner.go:124] > ]
	I0813 20:26:19.834746   78216 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 20:26:19.834755   78216 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip it.
	I0813 20:26:19.834759   78216 command_runner.go:124] > hooks_dir = [
	I0813 20:26:19.834763   78216 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 20:26:19.834769   78216 command_runner.go:124] > ]
	I0813 20:26:19.834777   78216 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 20:26:19.834787   78216 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 20:26:19.834792   78216 command_runner.go:124] > # its default mounts from the following two files:
	I0813 20:26:19.834795   78216 command_runner.go:124] > #
	I0813 20:26:19.834801   78216 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 20:26:19.834812   78216 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 20:26:19.834818   78216 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 20:26:19.834821   78216 command_runner.go:124] > #
	I0813 20:26:19.834830   78216 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 20:26:19.834836   78216 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 20:26:19.834845   78216 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 20:26:19.834850   78216 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 20:26:19.834856   78216 command_runner.go:124] > #
	I0813 20:26:19.834860   78216 command_runner.go:124] > #default_mounts_file = ""
	I0813 20:26:19.834868   78216 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 20:26:19.834877   78216 command_runner.go:124] > pids_limit = 1024
	I0813 20:26:19.834883   78216 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 20:26:19.834892   78216 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 20:26:19.834899   78216 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 20:26:19.834905   78216 command_runner.go:124] > # limit is never exceeded.
	I0813 20:26:19.834909   78216 command_runner.go:124] > log_size_max = -1
	I0813 20:26:19.834932   78216 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 20:26:19.834938   78216 command_runner.go:124] > log_to_journald = false
	I0813 20:26:19.834945   78216 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 20:26:19.834952   78216 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 20:26:19.834957   78216 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 20:26:19.834966   78216 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 20:26:19.834975   78216 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 20:26:19.834979   78216 command_runner.go:124] > bind_mount_prefix = ""
	I0813 20:26:19.834985   78216 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 20:26:19.834991   78216 command_runner.go:124] > read_only = false
	I0813 20:26:19.834998   78216 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 20:26:19.835007   78216 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 20:26:19.835011   78216 command_runner.go:124] > # live configuration reload.
	I0813 20:26:19.835015   78216 command_runner.go:124] > log_level = "info"
	I0813 20:26:19.835020   78216 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 20:26:19.835028   78216 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:19.835031   78216 command_runner.go:124] > log_filter = ""
	I0813 20:26:19.835038   78216 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 20:26:19.835049   78216 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 20:26:19.835055   78216 command_runner.go:124] > # separated by comma.
	I0813 20:26:19.835058   78216 command_runner.go:124] > uid_mappings = ""
	I0813 20:26:19.835067   78216 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 20:26:19.835075   78216 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 20:26:19.835080   78216 command_runner.go:124] > # separated by comma.
	I0813 20:26:19.835083   78216 command_runner.go:124] > gid_mappings = ""
	I0813 20:26:19.835090   78216 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 20:26:19.835098   78216 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 20:26:19.835104   78216 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 20:26:19.835111   78216 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 20:26:19.835117   78216 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 20:26:19.835123   78216 command_runner.go:124] > # and manage their lifecycle.
	I0813 20:26:19.835130   78216 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 20:26:19.835136   78216 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 20:26:19.835142   78216 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 20:26:19.835151   78216 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 20:26:19.835159   78216 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 20:26:19.835164   78216 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 20:26:19.835170   78216 command_runner.go:124] > drop_infra_ctr = false
	I0813 20:26:19.835177   78216 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 20:26:19.835184   78216 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 20:26:19.835192   78216 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 20:26:19.835200   78216 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 20:26:19.835208   78216 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 20:26:19.835215   78216 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 20:26:19.835240   78216 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 20:26:19.835255   78216 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 20:26:19.835261   78216 command_runner.go:124] > pinns_path = ""
	I0813 20:26:19.835274   78216 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 20:26:19.835285   78216 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 20:26:19.835299   78216 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 20:26:19.835306   78216 command_runner.go:124] > default_runtime = "runc"
	I0813 20:26:19.835313   78216 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 20:26:19.835322   78216 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 20:26:19.835335   78216 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 20:26:19.835342   78216 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 20:26:19.835347   78216 command_runner.go:124] > #
	I0813 20:26:19.835352   78216 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 20:26:19.835361   78216 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 20:26:19.835365   78216 command_runner.go:124] > #  runtime_type = "oci"
	I0813 20:26:19.835370   78216 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 20:26:19.835375   78216 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 20:26:19.835382   78216 command_runner.go:124] > #  allowed_annotations = []
	I0813 20:26:19.835385   78216 command_runner.go:124] > # Where:
	I0813 20:26:19.835391   78216 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 20:26:19.835398   78216 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 20:26:19.835407   78216 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 20:26:19.835417   78216 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 20:26:19.835425   78216 command_runner.go:124] > #   in $PATH.
	I0813 20:26:19.835432   78216 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 20:26:19.835439   78216 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 20:26:19.835446   78216 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 20:26:19.835449   78216 command_runner.go:124] > #   state.
	I0813 20:26:19.835456   78216 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 20:26:19.835464   78216 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 20:26:19.835471   78216 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 20:26:19.835480   78216 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 20:26:19.835488   78216 command_runner.go:124] > #   The currently recognized values are:
	I0813 20:26:19.835495   78216 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 20:26:19.835504   78216 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 20:26:19.835512   78216 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 20:26:19.835519   78216 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 20:26:19.835524   78216 command_runner.go:124] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0813 20:26:19.835530   78216 command_runner.go:124] > runtime_type = "oci"
	I0813 20:26:19.835535   78216 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 20:26:19.835543   78216 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 20:26:19.835548   78216 command_runner.go:124] > # running containers
	I0813 20:26:19.835552   78216 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 20:26:19.835559   78216 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 20:26:19.835569   78216 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 20:26:19.835577   78216 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0813 20:26:19.835585   78216 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 20:26:19.835589   78216 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 20:26:19.835597   78216 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 20:26:19.835603   78216 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 20:26:19.835613   78216 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 20:26:19.835624   78216 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
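The runtimes table documented above is plain TOML, so the handler entries can be inspected programmatically. A minimal sketch using github.com/BurntSushi/toml; the struct shape mirrors the fields documented above, and /etc/crio/crio.conf is the conventional path rather than anything this log confirms:

    package main

    import (
        "fmt"

        "github.com/BurntSushi/toml"
    )

    type runtimeHandler struct {
        RuntimePath string `toml:"runtime_path"`
        RuntimeType string `toml:"runtime_type"`
        RuntimeRoot string `toml:"runtime_root"`
    }

    type crioConfig struct {
        Crio struct {
            Runtime struct {
                DefaultRuntime string                    `toml:"default_runtime"`
                Runtimes       map[string]runtimeHandler `toml:"runtimes"`
            } `toml:"runtime"`
        } `toml:"crio"`
    }

    func main() {
        var cfg crioConfig
        if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
            panic(err)
        }
        fmt.Println("default runtime:", cfg.Crio.Runtime.DefaultRuntime)
        for name, h := range cfg.Crio.Runtime.Runtimes {
            fmt.Printf("%s -> %s (%s, root %s)\n", name, h.RuntimePath, h.RuntimeType, h.RuntimeRoot)
        }
    }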
	I0813 20:26:19.835640   78216 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 20:26:19.835649   78216 command_runner.go:124] > #
	I0813 20:26:19.835656   78216 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 20:26:19.835664   78216 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 20:26:19.835671   78216 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 20:26:19.835680   78216 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 20:26:19.835686   78216 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 20:26:19.835692   78216 command_runner.go:124] > [crio.image]
	I0813 20:26:19.835698   78216 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 20:26:19.835705   78216 command_runner.go:124] > default_transport = "docker://"
	I0813 20:26:19.835712   78216 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 20:26:19.835720   78216 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:26:19.835724   78216 command_runner.go:124] > global_auth_file = ""
	I0813 20:26:19.835730   78216 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 20:26:19.835740   78216 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:19.835750   78216 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 20:26:19.835760   78216 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 20:26:19.835772   78216 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 20:26:19.835783   78216 command_runner.go:124] > # This option supports live configuration reload.
	I0813 20:26:19.835792   78216 command_runner.go:124] > pause_image_auth_file = ""
	I0813 20:26:19.835803   78216 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 20:26:19.835812   78216 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0813 20:26:19.835822   78216 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0813 20:26:19.835832   78216 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 20:26:19.835839   78216 command_runner.go:124] > pause_command = "/pause"
	I0813 20:26:19.835845   78216 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 20:26:19.835854   78216 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 20:26:19.835865   78216 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 20:26:19.835874   78216 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 20:26:19.835880   78216 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 20:26:19.835886   78216 command_runner.go:124] > signature_policy = ""
	I0813 20:26:19.835893   78216 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 20:26:19.835902   78216 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 20:26:19.835908   78216 command_runner.go:124] > # changing them here.
	I0813 20:26:19.835913   78216 command_runner.go:124] > #insecure_registries = "[]"
	I0813 20:26:19.835921   78216 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 20:26:19.835929   78216 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 20:26:19.835936   78216 command_runner.go:124] > image_volumes = "mkdir"
	I0813 20:26:19.835942   78216 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 20:26:19.835952   78216 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 20:26:19.835961   78216 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0813 20:26:19.835969   78216 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 20:26:19.835974   78216 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 20:26:19.835979   78216 command_runner.go:124] > #registries = [
	I0813 20:26:19.835983   78216 command_runner.go:124] > # ]
	I0813 20:26:19.835990   78216 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 20:26:19.835997   78216 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 20:26:19.836004   78216 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 20:26:19.836010   78216 command_runner.go:124] > # CNI plugins.
	I0813 20:26:19.836014   78216 command_runner.go:124] > [crio.network]
	I0813 20:26:19.836022   78216 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 20:26:19.836027   78216 command_runner.go:124] > # CRI-O will pick up the first one found in network_dir.
	I0813 20:26:19.836035   78216 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 20:26:19.836045   78216 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 20:26:19.836051   78216 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 20:26:19.836057   78216 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 20:26:19.836062   78216 command_runner.go:124] > plugin_dirs = [
	I0813 20:26:19.836068   78216 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 20:26:19.836071   78216 command_runner.go:124] > ]
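As the comment above notes, when cni_default_network is unset CRI-O picks the first network it finds in network_dir. A sketch of the same lookup with github.com/containernetworking/cni/libcni; the extension list is an assumption matching common CNI file names:

    package main

    import (
        "fmt"
        "sort"

        "github.com/containernetworking/cni/libcni"
    )

    func main() {
        // Scan network_dir the way CRI-O would when choosing a default network.
        files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist", ".json"})
        if err != nil {
            panic(err)
        }
        sort.Strings(files) // lexical order; files[0] would be the default network
        for _, f := range files {
            fmt.Println(f)
        }
    }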
	I0813 20:26:19.836088   78216 command_runner.go:124] > # A necessary configuration for Prometheus-based metrics retrieval
	I0813 20:26:19.836094   78216 command_runner.go:124] > [crio.metrics]
	I0813 20:26:19.836099   78216 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 20:26:19.836105   78216 command_runner.go:124] > enable_metrics = false
	I0813 20:26:19.836111   78216 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 20:26:19.836118   78216 command_runner.go:124] > metrics_port = 9090
	I0813 20:26:19.836140   78216 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 20:26:19.836146   78216 command_runner.go:124] > metrics_socket = ""
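If enable_metrics were flipped to true, the settings above would expose a Prometheus endpoint on metrics_port. A minimal sketch of scraping it; the localhost address is an assumption, and the port matches the config above:

    package main

    import (
        "io"
        "net/http"
        "os"
    )

    func main() {
        // metrics_port = 9090 per the config above; requires enable_metrics = true.
        resp, err := http.Get("http://127.0.0.1:9090/metrics")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body) // plain-text Prometheus exposition format
    }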
	I0813 20:26:19.836189   78216 command_runner.go:124] ! time="2021-08-13T20:26:19Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 20:26:19.836203   78216 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 20:26:19.836260   78216 cni.go:93] Creating CNI manager for ""
	I0813 20:26:19.836269   78216 cni.go:154] 2 nodes found, recommending kindnet
	I0813 20:26:19.836281   78216 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:26:19.836295   78216 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813202501-13784 NodeName:multinode-20210813202501-13784-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:systemd ClientCAFile:/
var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:26:19.836416   78216 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813202501-13784-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
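	minikube produces the kubeadm config above by filling a Go text/template with the options logged at kubeadm.go:153. A much-reduced sketch of that mechanism, assuming made-up field names and covering only the InitConfiguration head, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative template; field names here are not minikube's exact struct.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        t.Execute(os.Stdout, map[string]interface{}{
            "AdvertiseAddress": "192.168.49.3",
            "APIServerPort":    8443,
            "CRISocket":        "/var/run/crio/crio.sock",
            "NodeName":         "multinode-20210813202501-13784-m02",
        })
    }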
	
	I0813 20:26:19.836482   78216 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-20210813202501-13784-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202501-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:26:19.836529   78216 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:26:19.842409   78216 command_runner.go:124] > kubeadm
	I0813 20:26:19.842430   78216 command_runner.go:124] > kubectl
	I0813 20:26:19.842435   78216 command_runner.go:124] > kubelet
	I0813 20:26:19.842968   78216 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:26:19.843013   78216 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0813 20:26:19.849216   78216 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (565 bytes)
	I0813 20:26:19.860615   78216 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:26:19.871920   78216 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:26:19.874653   78216 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:26:19.883001   78216 host.go:66] Checking if "multinode-20210813202501-13784" exists ...
	I0813 20:26:19.883201   78216 config.go:177] Loaded profile config "multinode-20210813202501-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:26:19.883253   78216 start.go:241] JoinCluster: &{Name:multinode-20210813202501-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813202501-13784 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 20:26:19.883334   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0813 20:26:19.883381   78216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:26:19.920990   78216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:26:20.065315   78216 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token iw6b04.4dm21rt6drg29sf5 --discovery-token-ca-cert-hash sha256:c4abb71b090fb6a33c758a3743cc840f782cf9be45db9979473fed7ebf39bccf 
	I0813 20:26:20.068194   78216 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 20:26:20.068231   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token iw6b04.4dm21rt6drg29sf5 --discovery-token-ca-cert-hash sha256:c4abb71b090fb6a33c758a3743cc840f782cf9be45db9979473fed7ebf39bccf --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813202501-13784-m02"
	I0813 20:26:20.206885   78216 command_runner.go:124] ! 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	I0813 20:26:20.210019   78216 command_runner.go:124] ! 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0813 20:26:20.210051   78216 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
	I0813 20:26:20.278252   78216 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 20:26:26.404169   78216 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 20:26:26.404204   78216 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0813 20:26:26.404217   78216 command_runner.go:124] > KERNEL_VERSION: 4.9.0-16-amd64
	I0813 20:26:26.404223   78216 command_runner.go:124] > OS: Linux
	I0813 20:26:26.404228   78216 command_runner.go:124] > CGROUPS_CPU: enabled
	I0813 20:26:26.404235   78216 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0813 20:26:26.404240   78216 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0813 20:26:26.404248   78216 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0813 20:26:26.404257   78216 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0813 20:26:26.404266   78216 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0813 20:26:26.404275   78216 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0813 20:26:26.404283   78216 command_runner.go:124] > CGROUPS_HUGETLB: missing
	I0813 20:26:26.404291   78216 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0813 20:26:26.404302   78216 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0813 20:26:26.404313   78216 command_runner.go:124] > [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
	I0813 20:26:26.404322   78216 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 20:26:26.404333   78216 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 20:26:26.404349   78216 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 20:26:26.404359   78216 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0813 20:26:26.404368   78216 command_runner.go:124] > This node has joined the cluster:
	I0813 20:26:26.404379   78216 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0813 20:26:26.404390   78216 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0813 20:26:26.404400   78216 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0813 20:26:26.404421   78216 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token iw6b04.4dm21rt6drg29sf5 --discovery-token-ca-cert-hash sha256:c4abb71b090fb6a33c758a3743cc840f782cf9be45db9979473fed7ebf39bccf --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813202501-13784-m02": (6.336178268s)
	I0813 20:26:26.404445   78216 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0813 20:26:26.469018   78216 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0813 20:26:26.522804   78216 start.go:243] JoinCluster complete in 6.639545869s
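The join that just completed is a single shell invocation run over SSH. Stripped of the ssh_runner plumbing, it amounts to roughly the following; the token and hash are placeholders, and the PATH prefix mirrors what the log shows:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Same command shape as the logged kubeadm join, with secrets elided.
        join := "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH " +
            "kubeadm join control-plane.minikube.internal:8443 " +
            "--token <token> --discovery-token-ca-cert-hash sha256:<hash> " +
            "--ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock " +
            "--node-name=multinode-20210813202501-13784-m02"
        cmd := exec.Command("/bin/bash", "-c", join)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }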
	I0813 20:26:26.522835   78216 cni.go:93] Creating CNI manager for ""
	I0813 20:26:26.522843   78216 cni.go:154] 2 nodes found, recommending kindnet
	I0813 20:26:26.522894   78216 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:26:26.525993   78216 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 20:26:26.526020   78216 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0813 20:26:26.526030   78216 command_runner.go:124] > Device: 801h/2049d	Inode: 4333431     Links: 1
	I0813 20:26:26.526041   78216 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 20:26:26.526051   78216 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0813 20:26:26.526067   78216 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0813 20:26:26.526078   78216 command_runner.go:124] > Change: 2021-08-10 21:18:56.705166650 +0000
	I0813 20:26:26.526086   78216 command_runner.go:124] >  Birth: -
	I0813 20:26:26.526172   78216 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:26:26.526198   78216 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:26:26.537926   78216 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:26:26.700829   78216 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0813 20:26:26.702648   78216 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0813 20:26:26.704305   78216 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0813 20:26:26.762507   78216 command_runner.go:124] > daemonset.apps/kindnet configured
	I0813 20:26:26.765727   78216 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 20:26:26.767538   78216 out.go:177] * Verifying Kubernetes components...
	I0813 20:26:26.767594   78216 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:26:26.777370   78216 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:26:26.777644   78216 kapi.go:59] client config for multinode-20210813202501-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/multinode-20210813202501-
13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:26:26.778844   78216 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813202501-13784-m02" to be "Ready" ...
	I0813 20:26:26.778919   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:26.778928   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:26.778932   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:26.778936   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:26.780577   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:26.780598   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:26.780604   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:26.780609   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:26.780615   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:26.780620   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:26.780626   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:26 GMT
	I0813 20:26:26.780731   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"547","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5308 chars]
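	The 500 ms GET loop that follows is node_ready.go waiting for the new node's Ready condition to flip to True. A condensed client-go sketch of the same wait; the kubeconfig path is an assumption, and the interval and timeout match the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        name := "multinode-20210813202501-13784-m02"
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat errors as transient and keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("node", name, "is Ready")
    }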
	I0813 20:26:27.281774   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:27.281806   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:27.281813   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:27.281819   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:27.283836   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:27.283857   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:27.283865   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:27.283869   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:27.283874   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:27.283879   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:27.283883   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:27 GMT
	I0813 20:26:27.283992   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"547","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5308 chars]
	I0813 20:26:27.781558   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:27.781582   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:27.781587   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:27.781596   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:27.783420   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:27.783440   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:27.783446   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:27.783451   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:27 GMT
	I0813 20:26:27.783455   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:27.783459   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:27.783464   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:27.783551   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"547","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5308 chars]
	I0813 20:26:28.281107   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:28.281131   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:28.281137   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:28.281141   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:28.283315   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:28.283335   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:28.283341   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:28.283349   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:28.283353   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:28.283358   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:28.283364   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:28 GMT
	I0813 20:26:28.283466   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"547","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5308 chars]
	I0813 20:26:28.781654   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:28.781676   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:28.781682   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:28.781686   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:28.783952   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:28.783974   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:28.783980   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:28 GMT
	I0813 20:26:28.783984   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:28.783989   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:28.783994   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:28.784000   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:28.784078   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"547","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5308 chars]
	I0813 20:26:28.784338   78216 node_ready.go:58] node "multinode-20210813202501-13784-m02" has status "Ready":"False"
	I0813 20:26:29.281654   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:29.281680   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:29.281688   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:29.281695   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:29.284058   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:29.284077   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:29.284083   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:29.284087   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:29.284090   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:29.284093   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:29 GMT
	I0813 20:26:29.284097   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:29.284191   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"547","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5308 chars]
	I0813 20:26:29.781792   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:29.781815   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:29.781821   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:29.781825   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:29.785649   78216 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:29.785689   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:29.785697   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:29.785701   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:29.785706   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:29.785710   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:29.785715   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:29 GMT
	I0813 20:26:29.785843   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:30.281317   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:30.281343   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:30.281349   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:30.281353   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:30.283436   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:30.283463   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:30.283471   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:30.283476   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:30 GMT
	I0813 20:26:30.283481   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:30.283490   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:30.283493   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:30.283601   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:30.781118   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:30.781144   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:30.781150   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:30.781155   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:30.782872   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:30.782920   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:30.782925   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:30.782928   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:30.782931   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:30.782934   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:30.782938   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:30 GMT
	I0813 20:26:30.783080   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:31.281547   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:31.281573   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:31.281579   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:31.281583   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:31.283639   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:31.283660   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:31.283667   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:31.283672   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:31.283677   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:31 GMT
	I0813 20:26:31.283681   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:31.283686   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:31.283766   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:31.284063   78216 node_ready.go:58] node "multinode-20210813202501-13784-m02" has status "Ready":"False"
	I0813 20:26:31.781683   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:31.781712   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:31.781717   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:31.781722   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:31.783462   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:31.783483   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:31.783490   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:31.783498   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:31.783502   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:31.783507   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:31.783512   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:31 GMT
	I0813 20:26:31.783612   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:32.281306   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:32.281332   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:32.281338   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.281343   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.283729   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:32.283746   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:32.283751   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.283755   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.283758   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:32.283760   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:32.283763   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.283861   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:32.781422   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:32.781448   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:32.781456   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:32.781462   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:32.783697   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:32.783719   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:32.783724   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:32.783727   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:32.783730   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:32 GMT
	I0813 20:26:32.783733   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:32.783736   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:32.783865   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:33.281436   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:33.281465   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:33.281474   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.281480   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.283620   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:33.283641   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:33.283647   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.283652   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.283656   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:33.283660   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:33.283664   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.283822   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:33.781456   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:33.781526   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:33.781555   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:33.781563   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:33.784440   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:33.784463   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:33.784468   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:33.784472   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:33.784476   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:33.784481   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:33.784485   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:33 GMT
	I0813 20:26:33.784582   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:33.784822   78216 node_ready.go:58] node "multinode-20210813202501-13784-m02" has status "Ready":"False"
	I0813 20:26:34.281134   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:34.281161   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:34.281168   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:34.281180   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:34.283237   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:34.283255   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:34.283260   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:34 GMT
	I0813 20:26:34.283264   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:34.283267   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:34.283272   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:34.283276   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:34.283466   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:34.782010   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:34.782033   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:34.782038   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:34.782042   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:34.784364   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:34.784385   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:34.784390   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:34.784394   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:34.784397   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:34.784401   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:34.784404   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:34 GMT
	I0813 20:26:34.784511   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:35.281206   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:35.281232   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:35.281237   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:35.281242   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:35.283483   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:35.283504   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:35.283510   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:35.283515   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:35.283519   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:35.283524   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:35.283529   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:35 GMT
	I0813 20:26:35.283632   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:35.781188   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:35.781212   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:35.781218   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:35.781222   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:35.784546   78216 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:35.784569   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:35.784577   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:35.784582   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:35.784586   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:35.784590   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:35.784595   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:35 GMT
	I0813 20:26:35.784695   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"557","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5417 chars]
	I0813 20:26:35.784974   78216 node_ready.go:58] node "multinode-20210813202501-13784-m02" has status "Ready":"False"
	I0813 20:26:36.281168   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:36.281195   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.281206   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.281215   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.283290   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:36.283310   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.283317   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.283320   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.283323   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.283327   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.283330   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.283428   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"574","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metada
ta":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5682 chars]
	I0813 20:26:36.283659   78216 node_ready.go:49] node "multinode-20210813202501-13784-m02" has status "Ready":"True"
	I0813 20:26:36.283676   78216 node_ready.go:38] duration metric: took 9.504813367s waiting for node "multinode-20210813202501-13784-m02" to be "Ready" ...
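
The poll visible above runs on a 500ms cadence: each iteration GETs the Node object and inspects its "Ready" condition until it reports "True" (here after ~9.5s, once the kubelet posts a fresh status at resourceVersion 574). A minimal client-go sketch of such a loop, assuming a standard kubeconfig — this is illustrative, not minikube's actual node_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the Node object until its Ready condition is True
	// or the timeout expires.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "multinode-20210813202501-13784-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}

Swallowing transient GET errors and returning false keeps the loop polling until the overall deadline, which is one plausible reading of the uninterrupted run of 200 OK GETs above: only a status flip ends the wait early.
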
	I0813 20:26:36.283685   78216 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:26:36.283735   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 20:26:36.283745   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.283750   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.283754   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.286424   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:36.286447   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.286455   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.286460   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.286464   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.286467   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.286470   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.286970   78216 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"504","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 68324 chars]
	I0813 20:26:36.288408   78216 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.288466   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-z5bmn
	I0813 20:26:36.288474   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.288479   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.288483   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.290017   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:36.290035   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.290041   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.290046   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.290051   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.290058   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.290063   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.290132   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-z5bmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"217d65d1-6fe4-48d9-954f-7246653dbdd4","resourceVersion":"504","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"ca9d29ff-786a-4475-a008-5cab6a027b34","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca9d29ff-786a-4475-a008-5cab6a027b34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5735 chars]
	I0813 20:26:36.290444   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:36.290458   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.290462   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.290466   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.292011   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:36.292025   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.292031   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.292036   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.292041   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.292046   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.292050   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.292147   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:36.292438   78216 pod_ready.go:92] pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:36.292461   78216 pod_ready.go:81] duration metric: took 4.034112ms waiting for pod "coredns-558bd4d5db-z5bmn" in "kube-system" namespace to be "Ready" ...
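
After the node turns Ready, the log switches to per-pod waits with a 6m0s budget each: fetch the pod, check its PodReady condition, and (as the paired GETs on the node show) confirm the hosting node as well. A sketch of the pod-side check, reusing the client-go imports from the sketch above; the function name is illustrative, not minikube's pod_ready.go API:

	// podIsReady reports whether the named pod's PodReady condition is True.
	func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

For the waits logged here the call would look like podIsReady(cs, "kube-system", "coredns-558bd4d5db-z5bmn"); since the pods are already Ready in the cached PodList, each wait completes in single-digit milliseconds.
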
	I0813 20:26:36.292472   78216 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.292528   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813202501-13784
	I0813 20:26:36.292539   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.292545   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.292552   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.294047   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:36.294064   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.294069   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.294075   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.294079   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.294084   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.294088   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.294164   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813202501-13784","namespace":"kube-system","uid":"019ebe07-83e4-44a1-a5c0-c1fd5f4d32bc","resourceVersion":"326","creationTimestamp":"2021-08-13T20:25:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"29b46eb226f31ece96d42b406a7c6fc4","kubernetes.io/config.mirror":"29b46eb226f31ece96d42b406a7c6fc4","kubernetes.io/config.seen":"2021-08-13T20:25:32.303988407Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash" [truncated 5559 chars]
	I0813 20:26:36.294421   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:36.294433   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.294438   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.294442   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.298094   78216 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 20:26:36.298110   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.298116   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.298121   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.298125   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.298132   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.298139   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.298234   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:36.298451   78216 pod_ready.go:92] pod "etcd-multinode-20210813202501-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:36.298462   78216 pod_ready.go:81] duration metric: took 5.983011ms waiting for pod "etcd-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.298474   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.298511   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813202501-13784
	I0813 20:26:36.298519   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.298523   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.298527   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.299997   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:36.300010   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.300016   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.300020   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.300025   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.300029   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.300033   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.300192   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813202501-13784","namespace":"kube-system","uid":"87fefdcc-c5d1-42d3-991a-ad28c4f7a669","resourceVersion":"355","creationTimestamp":"2021-08-13T20:25:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"a494d4f8a8d6a2671115f307173e8700","kubernetes.io/config.mirror":"a494d4f8a8d6a2671115f307173e8700","kubernetes.io/config.seen":"2021-08-13T20:25:32.303990597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address. [truncated 8088 chars]
	I0813 20:26:36.300481   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:36.300496   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.300503   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.300508   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.301987   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:36.302002   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.302008   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.302013   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.302017   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.302022   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.302027   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.302144   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:36.302386   78216 pod_ready.go:92] pod "kube-apiserver-multinode-20210813202501-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:36.302399   78216 pod_ready.go:81] duration metric: took 3.918616ms waiting for pod "kube-apiserver-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.302409   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.302459   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813202501-13784
	I0813 20:26:36.302471   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.302477   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.302483   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.303807   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:36.303822   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.303828   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.303832   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.303837   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.303841   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.303846   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.303936   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813202501-13784","namespace":"kube-system","uid":"03314bb8-93d9-4da5-b960-3580e4f5089a","resourceVersion":"289","creationTimestamp":"2021-08-13T20:25:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c8d2d20e82b9f18ce43c33bb9529b104","kubernetes.io/config.mirror":"c8d2d20e82b9f18ce43c33bb9529b104","kubernetes.io/config.seen":"2021-08-13T20:25:32.303992549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 7654 chars]
	I0813 20:26:36.304189   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:36.304199   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.304204   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.304208   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.305576   78216 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 20:26:36.305588   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.305592   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.305595   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.305600   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.305603   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.305606   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.305696   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:36.305894   78216 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813202501-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:36.305905   78216 pod_ready.go:81] duration metric: took 3.488322ms waiting for pod "kube-controller-manager-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.305913   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5qfxb" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.481250   78216 request.go:600] Waited for 175.286018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qfxb
	I0813 20:26:36.481338   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qfxb
	I0813 20:26:36.481360   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.481366   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.481371   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.483571   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:36.483602   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.483610   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.483615   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.483619   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.483623   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.483626   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.483716   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5qfxb","generateName":"kube-proxy-","namespace":"kube-system","uid":"098e1fdf-be73-4c00-af36-bb0432215045","resourceVersion":"476","creationTimestamp":"2021-08-13T20:25:45Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8e8d5a52-c8fa-4a19-9f85-fd1c335f478a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e8d5a52-c8fa-4a19-9f85-fd1c335f478a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5754 chars]
	I0813 20:26:36.682080   78216 request.go:600] Waited for 198.039477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:36.682154   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:36.682166   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.682172   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.682176   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.684456   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:36.684485   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.684493   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.684499   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.684504   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.684510   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.684521   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.684640   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:36.684982   78216 pod_ready.go:92] pod "kube-proxy-5qfxb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:36.685000   78216 pod_ready.go:81] duration metric: took 379.081055ms waiting for pod "kube-proxy-5qfxb" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.685013   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wjcn" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:36.881255   78216 request.go:600] Waited for 196.175353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6wjcn
	I0813 20:26:36.881313   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6wjcn
	I0813 20:26:36.881323   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:36.881329   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:36.881334   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:36.883697   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:36.883721   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:36.883729   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:36.883734   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:36.883739   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:36.883744   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:36.883748   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:36 GMT
	I0813 20:26:36.883907   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6wjcn","generateName":"kube-proxy-","namespace":"kube-system","uid":"32c35f26-825d-49a0-9802-09113ab5b37c","resourceVersion":"564","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8e8d5a52-c8fa-4a19-9f85-fd1c335f478a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e8d5a52-c8fa-4a19-9f85-fd1c335f478a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5762 chars]
	I0813 20:26:37.081711   78216 request.go:600] Waited for 197.384529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:37.081783   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784-m02
	I0813 20:26:37.081791   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:37.081797   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:37.081802   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:37.084001   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:37.084022   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:37.084029   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:37.084034   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:37.084040   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:37.084045   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:37.084049   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:37 GMT
	I0813 20:26:37.084136   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784-m02","uid":"bd93dd29-bff2-49b0-86dc-73ffd9fcfd22","resourceVersion":"574","creationTimestamp":"2021-08-13T20:26:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:26:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metada
ta":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5682 chars]
	I0813 20:26:37.084374   78216 pod_ready.go:92] pod "kube-proxy-6wjcn" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:37.084385   78216 pod_ready.go:81] duration metric: took 399.362878ms waiting for pod "kube-proxy-6wjcn" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:37.084394   78216 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:37.281827   78216 request.go:600] Waited for 197.367849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202501-13784
	I0813 20:26:37.281921   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813202501-13784
	I0813 20:26:37.281933   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:37.281941   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:37.281947   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:37.284099   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:37.284119   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:37.284125   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:37.284130   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:37 GMT
	I0813 20:26:37.284139   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:37.284143   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:37.284148   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:37.284230   78216 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813202501-13784","namespace":"kube-system","uid":"89df0c7c-6465-4ac8-ae60-a2fdb61756f7","resourceVersion":"291","creationTimestamp":"2021-08-13T20:25:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7694bd83a6b91cca55d5de526505eb47","kubernetes.io/config.mirror":"7694bd83a6b91cca55d5de526505eb47","kubernetes.io/config.seen":"2021-08-13T20:25:17.563985654Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T20:25:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4536 chars]
	I0813 20:26:37.481904   78216 request.go:600] Waited for 197.369421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:37.481974   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813202501-13784
	I0813 20:26:37.481983   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:37.481991   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:37.481998   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:37.484281   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:37.484305   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:37.484312   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:37 GMT
	I0813 20:26:37.484315   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:37.484318   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:37.484324   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:37.484330   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:37.484488   78216 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T20 [truncated 6600 chars]
	I0813 20:26:37.484741   78216 pod_ready.go:92] pod "kube-scheduler-multinode-20210813202501-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:26:37.484755   78216 pod_ready.go:81] duration metric: took 400.354396ms waiting for pod "kube-scheduler-multinode-20210813202501-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:26:37.484766   78216 pod_ready.go:38] duration metric: took 1.201072448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:26:37.484798   78216 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:26:37.484843   78216 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:26:37.494475   78216 system_svc.go:56] duration metric: took 9.672011ms WaitForService to wait for kubelet.
	I0813 20:26:37.494495   78216 kubeadm.go:547] duration metric: took 10.728728894s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:26:37.494519   78216 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:26:37.681919   78216 request.go:600] Waited for 187.325076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0813 20:26:37.681998   78216 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0813 20:26:37.682036   78216 round_trippers.go:438] Request Headers:
	I0813 20:26:37.682042   78216 round_trippers.go:442]     Accept: application/json, */*
	I0813 20:26:37.682046   78216 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 20:26:37.684399   78216 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 20:26:37.684421   78216 round_trippers.go:460] Response Headers:
	I0813 20:26:37.684443   78216 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 20:26:37.684447   78216 round_trippers.go:463]     Content-Type: application/json
	I0813 20:26:37.684450   78216 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4c55111a-5028-4314-9509-a61179ff1d78
	I0813 20:26:37.684453   78216 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: f0f49396-178d-4e17-b254-3613c2cb6510
	I0813 20:26:37.684457   78216 round_trippers.go:463]     Date: Fri, 13 Aug 2021 20:26:37 GMT
	I0813 20:26:37.684595   78216 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"multinode-20210813202501-13784","uid":"b9539855-b8cb-451c-8dc3-492f525c47b5","resourceVersion":"404","creationTimestamp":"2021-08-13T20:25:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813202501-13784","kubernetes.io/os":"linux","minikube.k8s.io/commit":"852050cf77fe767e86d5a194bb91c06c4dc6c13c","minikube.k8s.io/name":"multinode-20210813202501-13784","minikube.k8s.io/updated_at":"2021_08_13T20_25_27_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 13327 chars]
	I0813 20:26:37.684976   78216 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:26:37.684993   78216 node_conditions.go:123] node cpu capacity is 8
	I0813 20:26:37.685004   78216 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:26:37.685008   78216 node_conditions.go:123] node cpu capacity is 8
	I0813 20:26:37.685012   78216 node_conditions.go:105] duration metric: took 190.488285ms to run NodePressure ...
	I0813 20:26:37.685025   78216 start.go:231] waiting for startup goroutines ...
	I0813 20:26:37.728442   78216 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:26:37.730877   78216 out.go:177] * Done! kubectl is now configured to use "multinode-20210813202501-13784" cluster and "default" namespace by default
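Note on the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above: these come from client-go's default client-side rate limiter (QPS=5, Burst=10), not from server-side API Priority and Fairness. A minimal sketch, assuming a standard kubeconfig, of how a client-go consumer could raise those limits so a polling loop like the pod_ready checks above does not trip the limiter (illustrative only, not minikube's actual code):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5 and Burst=10; requests beyond that are
	// delayed locally, which is what the request.go:600 lines report above.
	config.QPS = 50
	config.Burst = 100
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	_ = clientset // hand this to the polling code
}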
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:25:03 UTC, end at Fri 2021-08-13 20:26:53 UTC. --
	Aug 13 20:26:10 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:10.870100793Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61 k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:42585056,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8c05226f-3b57-4f2d-b105-a86ad2e6dcfe name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:26:10 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:10.870988055Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-z5bmn/coredns" id=30aff115-295c-476b-9cc4-f2a3dd143546 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:26:10 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:10.881685905Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5f5ae77cfe6b6d2b7d54f655f97f14d52ad6b2f2e674ba5d5fc09b4b3b8f74ee/merged/etc/passwd: no such file or directory"
	Aug 13 20:26:10 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:10.881725293Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5f5ae77cfe6b6d2b7d54f655f97f14d52ad6b2f2e674ba5d5fc09b4b3b8f74ee/merged/etc/group: no such file or directory"
	Aug 13 20:26:10 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:10.991390919Z" level=info msg="Created container 6e513fcb283e99cb7667f5d17f79cca238f3f528cca4bbace4e0d24aae160eb9: kube-system/coredns-558bd4d5db-z5bmn/coredns" id=30aff115-295c-476b-9cc4-f2a3dd143546 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:26:10 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:10.992003455Z" level=info msg="Starting container: 6e513fcb283e99cb7667f5d17f79cca238f3f528cca4bbace4e0d24aae160eb9" id=c12913fb-962a-4e93-96bc-c7c5d9285b5b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:26:11 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:11.002012900Z" level=info msg="Started container 6e513fcb283e99cb7667f5d17f79cca238f3f528cca4bbace4e0d24aae160eb9: kube-system/coredns-558bd4d5db-z5bmn/coredns" id=c12913fb-962a-4e93-96bc-c7c5d9285b5b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:26:38 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:38.975605296Z" level=info msg="Running pod sandbox: default/busybox-84b6686758-nhdx8/POD" id=445733b7-dce4-4b03-911b-b21b224a3eef name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:26:38 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:38.990383760Z" level=info msg="Got pod network &{Name:busybox-84b6686758-nhdx8 Namespace:default ID:8c2bf7c8a05d72926c8ebcc4f07a6de9fd429bba066e337c588311f0f57c2217 NetNS:/var/run/netns/9cfd4f6b-9175-44eb-b980-bebad60ec614 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 20:26:38 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:38.990412151Z" level=info msg="About to add CNI network kindnet (type=ptp)"
	Aug 13 20:26:39 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:39.042084525Z" level=info msg="Got pod network &{Name:busybox-84b6686758-nhdx8 Namespace:default ID:8c2bf7c8a05d72926c8ebcc4f07a6de9fd429bba066e337c588311f0f57c2217 NetNS:/var/run/netns/9cfd4f6b-9175-44eb-b980-bebad60ec614 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 20:26:39 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:39.042231849Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 13 20:26:39 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:39.158362106Z" level=info msg="Ran pod sandbox 8c2bf7c8a05d72926c8ebcc4f07a6de9fd429bba066e337c588311f0f57c2217 with infra container: default/busybox-84b6686758-nhdx8/POD" id=445733b7-dce4-4b03-911b-b21b224a3eef name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:26:39 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:39.159373703Z" level=info msg="Checking image status: busybox:1.28" id=99c3e68c-7231-4919-aba0-6bde297d31e4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:26:39 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:39.159778726Z" level=info msg="Image busybox:1.28 not found" id=99c3e68c-7231-4919-aba0-6bde297d31e4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:26:39 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:39.160442612Z" level=info msg="Pulling image: busybox:1.28" id=3eafccf4-3c43-4a91-a742-a2aee5890179 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:26:39 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:39.163400223Z" level=info msg="Trying to access \"docker.io/library/busybox:1.28\""
	Aug 13 20:26:41 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:41.160302763Z" level=info msg="Trying to access \"docker.io/library/busybox:1.28\""
	Aug 13 20:26:44 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:44.370903383Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47" id=3eafccf4-3c43-4a91-a742-a2aee5890179 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:26:44 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:44.371602184Z" level=info msg="Checking image status: busybox:1.28" id=29a90b73-9e60-43cb-be3b-49bd05b7b025 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:26:44 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:44.372266142Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[docker.io/library/busybox:1.28],RepoDigests:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335],Size_:1365634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=29a90b73-9e60-43cb-be3b-49bd05b7b025 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:26:44 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:44.373045661Z" level=info msg="Creating container: default/busybox-84b6686758-nhdx8/busybox" id=faae76d9-b5e1-4c22-8b1a-ab13ef82ca19 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:26:44 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:44.532349233Z" level=info msg="Created container 6a472185a064cdfe61853eea2f84ca64e4689438aa1612c8198b1d9721f00545: default/busybox-84b6686758-nhdx8/busybox" id=faae76d9-b5e1-4c22-8b1a-ab13ef82ca19 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:26:44 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:44.532872126Z" level=info msg="Starting container: 6a472185a064cdfe61853eea2f84ca64e4689438aa1612c8198b1d9721f00545" id=9501dd4d-1565-490d-830a-26904a934d6d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:26:44 multinode-20210813202501-13784 crio[367]: time="2021-08-13 20:26:44.542236503Z" level=info msg="Started container 6a472185a064cdfe61853eea2f84ca64e4689438aa1612c8198b1d9721f00545: default/busybox-84b6686758-nhdx8/busybox" id=9501dd4d-1565-490d-830a-26904a934d6d name=/runtime.v1alpha2.RuntimeService/StartContainer
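The CRI-O entries above trace the CRI image-pull flow for busybox:1.28: ImageStatus (image not found), PullImage, ImageStatus again, then CreateContainer and StartContainer, all over the /runtime.v1alpha2 gRPC services on the CRI socket. A minimal sketch of the first two calls against that same socket using the v1alpha2 CRI client (illustrative; in the log it was the kubelet, not this code, issuing the calls):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	v1alpha2 "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Dial the socket CRI-O serves on (the cri-socket annotation above).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := v1alpha2.NewImageServiceClient(conn)
	spec := &v1alpha2.ImageSpec{Image: "busybox:1.28"}

	// Mirrors the "Checking image status" entries in the log.
	status, err := img.ImageStatus(context.TODO(), &v1alpha2.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if status.Image == nil {
		// Mirrors "Image busybox:1.28 not found" followed by "Pulling image".
		pulled, err := img.PullImage(context.TODO(), &v1alpha2.PullImageRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", pulled.ImageRef)
	}
}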
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID
	6a472185a064c       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   9 seconds ago        Running             busybox                   0                   8c2bf7c8a05d7
	6e513fcb283e9       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    42 seconds ago       Running             coredns                   0                   7345f50419329
	5d1fa131ada61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    About a minute ago   Running             storage-provisioner       0                   8304eee742227
	b3999dc548dbc       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    About a minute ago   Running             kindnet-cni               0                   5fb1fdc6b191a
	7ba027d7f6880       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    About a minute ago   Running             kube-proxy                0                   67a39cd42ca82
	be40c3ac3120f       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    About a minute ago   Running             etcd                      0                   4cc4b64d3af1a
	61a5ae23d8ddb       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    About a minute ago   Running             kube-scheduler            0                   238479c02fc5e
	d03677368b4fc       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    About a minute ago   Running             kube-apiserver            0                   a5eecbb8f2341
	3154784c683c3       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    About a minute ago   Running             kube-controller-manager   0                   ea52089fb6e5e
	
	* 
	* ==> coredns [6e513fcb283e99cb7667f5d17f79cca238f3f528cca4bbace4e0d24aae160eb9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210813202501-13784
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813202501-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=multinode-20210813202501-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_25_27_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:25:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813202501-13784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:26:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:25:42 +0000   Fri, 13 Aug 2021 20:25:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:25:42 +0000   Fri, 13 Aug 2021 20:25:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:25:42 +0000   Fri, 13 Aug 2021 20:25:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:25:42 +0000   Fri, 13 Aug 2021 20:25:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20210813202501-13784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                1b02c6e1-107f-4ef0-964f-4e8b485e5f13
	  Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-nhdx8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-558bd4d5db-z5bmn                                  100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     68s
	  kube-system                 etcd-multinode-20210813202501-13784                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         81s
	  kube-system                 kindnet-2k7g6                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      68s
	  kube-system                 kube-apiserver-multinode-20210813202501-13784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-multinode-20210813202501-13784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-5qfxb                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-multinode-20210813202501-13784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 81s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s   kubelet     Node multinode-20210813202501-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s   kubelet     Node multinode-20210813202501-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s   kubelet     Node multinode-20210813202501-13784 status is now: NodeHasSufficientPID
	  Normal  NodeReady                71s   kubelet     Node multinode-20210813202501-13784 status is now: NodeReady
	  Normal  Starting                 67s   kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210813202501-13784-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813202501-13784-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:26:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813202501-13784-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:26:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:26:36 +0000   Fri, 13 Aug 2021 20:26:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:26:36 +0000   Fri, 13 Aug 2021 20:26:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:26:36 +0000   Fri, 13 Aug 2021 20:26:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:26:36 +0000   Fri, 13 Aug 2021 20:26:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20210813202501-13784-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                9c79a255-4d59-4649-a02d-b88ebfc4b79c
	  Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-7gjcw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kindnet-pbrwr               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-proxy-6wjcn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 28s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x2 over 27s)  kubelet     Node multinode-20210813202501-13784-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x2 over 27s)  kubelet     Node multinode-20210813202501-13784-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x2 over 27s)  kubelet     Node multinode-20210813202501-13784-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 24s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                17s                kubelet     Node multinode-20210813202501-13784-m02 status is now: NodeReady
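The NodePressure verification logged earlier (node_conditions.go:122-123) reads the same Capacity fields shown in the two node descriptions above (cpu: 8, ephemeral-storage: 309568300Ki). A minimal sketch of that read path with client-go, using a hypothetical printNodeCapacity helper rather than minikube's actual implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printNodeCapacity lists every node and prints the two Capacity fields
// the NodePressure check reads.
func printNodeCapacity(ctx context.Context, clientset *kubernetes.Clientset) error {
	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]              // "8" in this run
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage] // "309568300Ki" in this run
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := printNodeCapacity(context.TODO(), clientset); err != nil {
		panic(err)
	}
}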
	
	* 
	* ==> dmesg <==
	* [  +0.786165] overlayfs: unrecognized mount option "volatile" or missing value
	[Aug13 20:22] IPv4: martian source 10.244.0.10 from 10.244.0.10, on dev vethcab936e5
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 52 fe f4 17 8f 0c 08 06        ......R.......
	[ +13.522000] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:23] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 1f 6a 3e fc 59 08 06        ......~.j>.Y..
	[  +0.000004] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 7e 1f 6a 3e fc 59 08 06        ......~.j>.Y..
	[ +12.033996] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth20449a06
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 66 a4 f6 97 57 13 08 06        ......f...W...
	[ +28.606451] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:24] cgroup: cgroup2: unknown option "nsdelegate"
	[ +26.175005] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:25] cgroup: cgroup2: unknown option "nsdelegate"
	[ +52.701758] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 03 b4 c7 81 76 08 06        ......j....v..
	[  +0.000004] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 6a 03 b4 c7 81 76 08 06        ......j....v..
	[Aug13 20:26] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vetha6458cd2
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff f2 06 c9 60 65 eb 08 06        .........`e...
	[  +3.925584] cgroup: cgroup2: unknown option "nsdelegate"
	[ +24.380438] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethb9f7e2ad
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 2e 36 83 64 f2 bf 08 06        .......6.d....
	[  +0.068207] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth2204ab9c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 cd 3c 28 2f b4 08 06        ........<(/...
	
	* 
	* ==> etcd [be40c3ac3120fe281a30a2d80337ff709b1f4aa33b3ce033a95c3577b0154374] <==
	* 2021-08-13 20:25:39.085861 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-multinode-20210813202501-13784\" " with result "range_response_count:1 size:4276" took too long (1.821227194s) to execute
	2021-08-13 20:25:39.085891 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.498637033s) to execute
	2021-08-13 20:25:39.085978 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5913" took too long (1.818006455s) to execute
	2021-08-13 20:25:41.062505 W | wal: sync duration of 1.964486046s, expected less than 1s
	2021-08-13 20:25:41.062781 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (1.9634748s) to execute
	2021-08-13 20:25:41.503004 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.917018253s) to execute
	2021-08-13 20:25:41.503172 W | etcdserver: request "header:<ID:8128006947418290404 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/service-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/service-controller\" value_size:125 >> failure:<>>" with result "size:16" took too long (131.675411ms) to execute
	2021-08-13 20:25:41.542911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:25:45.653014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:25:55.652687 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:05.652864 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:15.652467 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:25.652962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:35.652126 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:45.652262 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:26:49.158610 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.002546493s) to execute
	2021-08-13 20:26:49.158668 W | etcdserver: read-only range request "key:\"/registry/pods/default/busybox-84b6686758-nhdx8\" " with result "range_response_count:1 size:2800" took too long (1.153301611s) to execute
	2021-08-13 20:26:49.158757 W | etcdserver: read-only range request "key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true " with result "range_response_count:0 size:7" took too long (647.806294ms) to execute
	2021-08-13 20:26:49.158847 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true " with result "range_response_count:0 size:7" took too long (562.513544ms) to execute
	2021-08-13 20:26:51.525821 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.370048121s) to execute
	2021-08-13 20:26:51.525938 W | etcdserver: request "header:<ID:8128006947418291397 > lease_revoke:<id:70cc7b4130c63c0e>" with result "size:29" took too long (742.384003ms) to execute
	2021-08-13 20:26:51.526103 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:2 size:11388" took too long (651.560908ms) to execute
	2021-08-13 20:26:52.460039 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (303.985859ms) to execute
	2021-08-13 20:26:52.460062 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1128" took too long (588.281218ms) to execute
	2021-08-13 20:26:53.186916 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true " with result "range_response_count:0 size:7" took too long (208.000149ms) to execute
	
	* 
	* ==> kernel <==
	*  20:26:53 up  1:09,  0 users,  load average: 0.83, 0.99, 0.82
	Linux multinode-20210813202501-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [d03677368b4fc8505ede42659d596734444bbadce38d84eab2125a4bf8b9ba86] <==
	* Trace[1143279136]: [1.819353887s] [1.819353887s] END
	I0813 20:25:41.063380       1 trace.go:205] Trace[685914516]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/kube-controller-manager,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:25:39.098) (total time: 1964ms):
	Trace[685914516]: ---"About to write a response" 1964ms (20:25:00.063)
	Trace[685914516]: [1.964416929s] [1.964416929s] END
	I0813 20:25:41.503520       1 trace.go:205] Trace[498347605]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:25:39.585) (total time: 1918ms):
	Trace[498347605]: [1.918094792s] [1.918094792s] END
	I0813 20:25:44.861910       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:25:45.511991       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:25:59.591540       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:25:59.591588       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:25:59.591598       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:26:36.819347       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:26:36.819385       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:26:36.819393       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:26:49.159746       1 trace.go:205] Trace[554295284]: "Get" url:/api/v1/namespaces/default/pods/busybox-84b6686758-nhdx8,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:26:48.004) (total time: 1154ms):
	Trace[554295284]: ---"About to write a response" 1154ms (20:26:00.159)
	Trace[554295284]: [1.154831969s] [1.154831969s] END
	I0813 20:26:51.526777       1 trace.go:205] Trace[1061757879]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 20:26:50.874) (total time: 652ms):
	Trace[1061757879]: [652.667592ms] [652.667592ms] END
	I0813 20:26:51.527266       1 trace.go:205] Trace[1071543886]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.3,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:26:50.874) (total time: 653ms):
	Trace[1071543886]: ---"Listing from storage done" 652ms (20:26:00.526)
	Trace[1071543886]: [653.165856ms] [653.165856ms] END
	I0813 20:26:52.460601       1 trace.go:205] Trace[1361948331]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:26:51.871) (total time: 589ms):
	Trace[1361948331]: ---"About to write a response" 589ms (20:26:00.460)
	Trace[1361948331]: [589.26253ms] [589.26253ms] END
	
	* 
	* ==> kube-controller-manager [3154784c683c30e61357542bff958b2f49e1c3ebb401ee6f779fddc258fb80c8] <==
	* I0813 20:25:44.866747       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 20:25:44.909231       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:25:44.992011       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:25:45.008243       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:25:45.008275       1 shared_informer.go:247] Caches are synced for HPA 
	I0813 20:25:45.113859       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:25:45.408980       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:25:45.409003       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:25:45.458308       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:25:45.517028       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2k7g6"
	I0813 20:25:45.518562       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5qfxb"
	I0813 20:25:45.563465       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-cswgt"
	I0813 20:25:45.566945       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-z5bmn"
	I0813 20:25:45.582256       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-cswgt"
	W0813 20:25:45.612594       1 endpointslice_controller.go:305] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	W0813 20:26:26.106525       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210813202501-13784-m02" does not exist
	I0813 20:26:26.115654       1 range_allocator.go:373] Set node multinode-20210813202501-13784-m02 PodCIDR to [10.244.1.0/24]
	I0813 20:26:26.120673       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pbrwr"
	I0813 20:26:26.120697       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6wjcn"
	W0813 20:26:29.665823       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210813202501-13784-m02. Assuming now as a timestamp.
	I0813 20:26:29.665860       1 event.go:291] "Event occurred" object="multinode-20210813202501-13784-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210813202501-13784-m02 event: Registered Node multinode-20210813202501-13784-m02 in Controller"
	I0813 20:26:38.661454       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0813 20:26:38.666767       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-7gjcw"
	I0813 20:26:38.669625       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-nhdx8"
	I0813 20:26:39.676474       1 event.go:291] "Event occurred" object="default/busybox-84b6686758-7gjcw" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-84b6686758-7gjcw"
	
	* 
	* ==> kube-proxy [7ba027d7f6880405746dddb514903eec1d9ede50cf586af47240ed1a43ed4673] <==
	* I0813 20:25:46.758829       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:25:46.758906       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:25:46.758949       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:25:46.778421       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:25:46.778467       1 server_others.go:212] Using iptables Proxier.
	I0813 20:25:46.778480       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:25:46.778493       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:25:46.778888       1 server.go:643] Version: v1.21.3
	I0813 20:25:46.779543       1 config.go:224] Starting endpoint slice config controller
	I0813 20:25:46.779596       1 config.go:315] Starting service config controller
	I0813 20:25:46.779637       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:25:46.779598       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:25:46.782028       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:25:46.782939       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:25:46.882812       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:25:46.882827       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [61a5ae23d8ddbfd2ad8619d0e4a9eec8f186f3625565a78a7b47cfb35dde29e6] <==
	* W0813 20:25:24.176274       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:25:24.176283       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:25:24.270795       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:25:24.270900       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:25:24.270913       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:25:24.270929       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:25:24.273274       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:25:24.273650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:25:24.273835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:24.273983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:25:24.274106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:24.274240       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:25:24.274350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:25:24.274610       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:25:24.274653       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:25:24.274740       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:24.274971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:25:24.275035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:24.275119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:25:24.275687       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:25:25.144264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:25:25.144267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:25:25.154505       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:25:25.332540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0813 20:25:25.671735       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:25:03 UTC, end at Fri 2021-08-13 20:26:54 UTC. --
	Aug 13 20:25:45 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:45.699276    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f974e6cf-6607-465e-9a82-c175afed7c99-cni-cfg\") pod \"kindnet-2k7g6\" (UID: \"f974e6cf-6607-465e-9a82-c175afed7c99\") "
	Aug 13 20:25:45 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:45.699305    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/098e1fdf-be73-4c00-af36-bb0432215045-lib-modules\") pod \"kube-proxy-5qfxb\" (UID: \"098e1fdf-be73-4c00-af36-bb0432215045\") "
	Aug 13 20:25:45 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:45.699331    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/098e1fdf-be73-4c00-af36-bb0432215045-kube-proxy\") pod \"kube-proxy-5qfxb\" (UID: \"098e1fdf-be73-4c00-af36-bb0432215045\") "
	Aug 13 20:25:45 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:45.699357    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/098e1fdf-be73-4c00-af36-bb0432215045-xtables-lock\") pod \"kube-proxy-5qfxb\" (UID: \"098e1fdf-be73-4c00-af36-bb0432215045\") "
	Aug 13 20:25:45 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:45.699382    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/217d65d1-6fe4-48d9-954f-7246653dbdd4-config-volume\") pod \"coredns-558bd4d5db-z5bmn\" (UID: \"217d65d1-6fe4-48d9-954f-7246653dbdd4\") "
	Aug 13 20:25:45 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:45.699409    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f974e6cf-6607-465e-9a82-c175afed7c99-xtables-lock\") pod \"kindnet-2k7g6\" (UID: \"f974e6cf-6607-465e-9a82-c175afed7c99\") "
	Aug 13 20:25:45 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:45.699440    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsv6k\" (UniqueName: \"kubernetes.io/projected/f974e6cf-6607-465e-9a82-c175afed7c99-kube-api-access-vsv6k\") pod \"kindnet-2k7g6\" (UID: \"f974e6cf-6607-465e-9a82-c175afed7c99\") "
	Aug 13 20:25:45 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:45.699468    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q45x\" (UniqueName: \"kubernetes.io/projected/217d65d1-6fe4-48d9-954f-7246653dbdd4-kube-api-access-5q45x\") pod \"coredns-558bd4d5db-z5bmn\" (UID: \"217d65d1-6fe4-48d9-954f-7246653dbdd4\") "
	Aug 13 20:25:46 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:46.282035    1595 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:25:46 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:46.403829    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2nsr\" (UniqueName: \"kubernetes.io/projected/ee46d1da-a4a7-46b9-9ebe-6f53aef7c220-kube-api-access-m2nsr\") pod \"storage-provisioner\" (UID: \"ee46d1da-a4a7-46b9-9ebe-6f53aef7c220\") "
	Aug 13 20:25:46 multinode-20210813202501-13784 kubelet[1595]: I0813 20:25:46.403903    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ee46d1da-a4a7-46b9-9ebe-6f53aef7c220-tmp\") pod \"storage-provisioner\" (UID: \"ee46d1da-a4a7-46b9-9ebe-6f53aef7c220\") "
	Aug 13 20:25:52 multinode-20210813202501-13784 kubelet[1595]: E0813 20:25:52.876549    1595 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:25:56 multinode-20210813202501-13784 kubelet[1595]: E0813 20:25:56.767130    1595 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-z5bmn_kube-system_217d65d1-6fe4-48d9-954f-7246653dbdd4_0(2b09d149e97cb28c409bceaaa4e3824ae1b9c78b0d55b37748dc7aa93c05dece): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 20:25:56 multinode-20210813202501-13784 kubelet[1595]: E0813 20:25:56.767207    1595 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-z5bmn_kube-system_217d65d1-6fe4-48d9-954f-7246653dbdd4_0(2b09d149e97cb28c409bceaaa4e3824ae1b9c78b0d55b37748dc7aa93c05dece): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-z5bmn"
	Aug 13 20:25:56 multinode-20210813202501-13784 kubelet[1595]: E0813 20:25:56.767232    1595 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-z5bmn_kube-system_217d65d1-6fe4-48d9-954f-7246653dbdd4_0(2b09d149e97cb28c409bceaaa4e3824ae1b9c78b0d55b37748dc7aa93c05dece): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-z5bmn"
	Aug 13 20:25:56 multinode-20210813202501-13784 kubelet[1595]: E0813 20:25:56.767308    1595 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-z5bmn_kube-system(217d65d1-6fe4-48d9-954f-7246653dbdd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-z5bmn_kube-system(217d65d1-6fe4-48d9-954f-7246653dbdd4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-z5bmn_kube-system_217d65d1-6fe4-48d9-954f-7246653dbdd4_0(2b09d149e97cb28c409bceaaa4e3824ae1b9c78b0d55b37748dc7aa93c05dece): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-z5bmn" podUID=217d65d1-6fe4-48d9-954f-7246653dbdd4
	Aug 13 20:26:02 multinode-20210813202501-13784 kubelet[1595]: E0813 20:26:02.930289    1595 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:26:12 multinode-20210813202501-13784 kubelet[1595]: E0813 20:26:12.985549    1595 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:26:23 multinode-20210813202501-13784 kubelet[1595]: E0813 20:26:23.048156    1595 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:26:33 multinode-20210813202501-13784 kubelet[1595]: E0813 20:26:33.109615    1595 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:26:38 multinode-20210813202501-13784 kubelet[1595]: W0813 20:26:38.015712    1595 container.go:586] Failed to update stats for container "/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3": /sys/fs/cgroup/cpuset/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/cpuset.cpus found to be empty, continuing to push stats
	Aug 13 20:26:38 multinode-20210813202501-13784 kubelet[1595]: I0813 20:26:38.674638    1595 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:26:38 multinode-20210813202501-13784 kubelet[1595]: I0813 20:26:38.852275    1595 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xggv\" (UniqueName: \"kubernetes.io/projected/8a720f23-12e0-41c3-a6ef-5914cb9f39ab-kube-api-access-7xggv\") pod \"busybox-84b6686758-nhdx8\" (UID: \"8a720f23-12e0-41c3-a6ef-5914cb9f39ab\") "
	Aug 13 20:26:43 multinode-20210813202501-13784 kubelet[1595]: E0813 20:26:43.172952    1595 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:26:53 multinode-20210813202501-13784 kubelet[1595]: E0813 20:26:53.239127    1595 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3\": RecentStats: unable to find data in memory cache]"
	
	* 
	* ==> storage-provisioner [5d1fa131ada614a76a423ba15a07014dcf871b59a15fc956ca886dc36365fa96] <==
	* I0813 20:25:47.523718       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:25:47.531589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:25:47.531633       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:25:47.539025       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:25:47.539117       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f405ffe-cb21-43e1-84bf-fea2d9ac433a", APIVersion:"v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210813202501-13784_3c0f4477-7311-42f6-95c7-5765475a1b4a became leader
	I0813 20:25:47.539134       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210813202501-13784_3c0f4477-7311-42f6-95c7-5765475a1b4a!
	I0813 20:25:47.640064       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210813202501-13784_3c0f4477-7311-42f6-95c7-5765475a1b4a!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210813202501-13784 -n multinode-20210813202501-13784
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210813202501-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20210813202501-13784 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20210813202501-13784 describe pod : exit status 1 (47.648709ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context multinode-20210813202501-13784 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (7.71s)

TestPreload (158.11s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813203431-13784 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.0
E0813 20:36:16.135715   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813203431-13784 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.0: (1m52.323074103s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813203431-13784 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210813203431-13784 -- sudo crictl pull busybox: (5.476803629s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813203431-13784 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813203431-13784 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.3: (33.901008841s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813203431-13784 -- sudo crictl image ls
preload_test.go:85: Expected to find busybox in output of `docker images`, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

-- /stdout --
panic.go:613: *** TestPreload FAILED at 2021-08-13 20:37:03.948926542 +0000 UTC m=+1753.870363909
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect test-preload-20210813203431-13784
helpers_test.go:236: (dbg) docker inspect test-preload-20210813203431-13784:

-- stdout --
	[
	    {
	        "Id": "790e92cd961e0873caa0581a97ed40374737911f7c6ac85e0e570f13062f8296",
	        "Created": "2021-08-13T20:34:33.975482197Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 136823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:34:34.566682469Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/790e92cd961e0873caa0581a97ed40374737911f7c6ac85e0e570f13062f8296/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/790e92cd961e0873caa0581a97ed40374737911f7c6ac85e0e570f13062f8296/hostname",
	        "HostsPath": "/var/lib/docker/containers/790e92cd961e0873caa0581a97ed40374737911f7c6ac85e0e570f13062f8296/hosts",
	        "LogPath": "/var/lib/docker/containers/790e92cd961e0873caa0581a97ed40374737911f7c6ac85e0e570f13062f8296/790e92cd961e0873caa0581a97ed40374737911f7c6ac85e0e570f13062f8296-json.log",
	        "Name": "/test-preload-20210813203431-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20210813203431-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20210813203431-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3de4cef1d20a359b490e48640428781d122e15bf89608d62e821f3b70f9a0d3e-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3de4cef1d20a359b490e48640428781d122e15bf89608d62e821f3b70f9a0d3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3de4cef1d20a359b490e48640428781d122e15bf89608d62e821f3b70f9a0d3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3de4cef1d20a359b490e48640428781d122e15bf89608d62e821f3b70f9a0d3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20210813203431-13784",
	                "Source": "/var/lib/docker/volumes/test-preload-20210813203431-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20210813203431-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20210813203431-13784",
	                "name.minikube.sigs.k8s.io": "test-preload-20210813203431-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a24ebe7ff28df7deac06f0a8a793f4a3e409a047421249ab8a75e6b659b0f315",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32857"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32856"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32853"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32855"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32854"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a24ebe7ff28d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20210813203431-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "790e92cd961e"
	                    ],
	                    "NetworkID": "9fe05d214ab0c164e817d62b5c415a365c5f6cdcc756b38031a6be5efd7b19d5",
	                    "EndpointID": "b90c0e1241f6981b7775e95eca84cc3257f7923f1b728cb773da6ea14e12dec6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-20210813203431-13784 -n test-preload-20210813203431-13784
helpers_test.go:245: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-20210813203431-13784 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p test-preload-20210813203431-13784 logs -n 25: (1.015808751s)
helpers_test.go:253: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                             |              Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| kubectl | -p                                                          | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:26:47 UTC | Fri, 13 Aug 2021 20:26:47 UTC |
	|         | multinode-20210813202501-13784                              |                                    |         |         |                               |                               |
	|         | -- exec                                                     |                                    |         |         |                               |                               |
	|         | busybox-84b6686758-nhdx8                                    |                                    |         |         |                               |                               |
	|         | -- sh -c nslookup                                           |                                    |         |         |                               |                               |
	|         | host.minikube.internal | awk                                |                                    |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                     |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:26:49 UTC | Fri, 13 Aug 2021 20:26:54 UTC |
	|         | logs -n 25                                                  |                                    |         |         |                               |                               |
	| node    | add -p                                                      | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:26:54 UTC | Fri, 13 Aug 2021 20:27:20 UTC |
	|         | multinode-20210813202501-13784                              |                                    |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                      |                                    |         |         |                               |                               |
	| profile | list --output json                                          | minikube                           | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:21 UTC | Fri, 13 Aug 2021 20:27:21 UTC |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:22 UTC | Fri, 13 Aug 2021 20:27:22 UTC |
	|         | cp testdata/cp-test.txt                                     |                                    |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                    |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:22 UTC | Fri, 13 Aug 2021 20:27:22 UTC |
	|         | ssh sudo cat                                                |                                    |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                    |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784 cp testdata/cp-test.txt      | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:22 UTC | Fri, 13 Aug 2021 20:27:23 UTC |
	|         | multinode-20210813202501-13784-m02:/home/docker/cp-test.txt |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:23 UTC | Fri, 13 Aug 2021 20:27:23 UTC |
	|         | ssh -n                                                      |                                    |         |         |                               |                               |
	|         | multinode-20210813202501-13784-m02                          |                                    |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784 cp testdata/cp-test.txt      | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:23 UTC | Fri, 13 Aug 2021 20:27:23 UTC |
	|         | multinode-20210813202501-13784-m03:/home/docker/cp-test.txt |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:23 UTC | Fri, 13 Aug 2021 20:27:23 UTC |
	|         | ssh -n                                                      |                                    |         |         |                               |                               |
	|         | multinode-20210813202501-13784-m03                          |                                    |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:23 UTC | Fri, 13 Aug 2021 20:27:25 UTC |
	|         | node stop m03                                               |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:26 UTC | Fri, 13 Aug 2021 20:27:57 UTC |
	|         | node start m03                                              |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	| stop    | -p                                                          | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:27:58 UTC | Fri, 13 Aug 2021 20:28:40 UTC |
	|         | multinode-20210813202501-13784                              |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:28:40 UTC | Fri, 13 Aug 2021 20:30:08 UTC |
	|         | multinode-20210813202501-13784                              |                                    |         |         |                               |                               |
	|         | --wait=true -v=8                                            |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:08 UTC | Fri, 13 Aug 2021 20:30:13 UTC |
	|         | node delete m03                                             |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:13 UTC | Fri, 13 Aug 2021 20:30:54 UTC |
	|         | stop                                                        |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:55 UTC | Fri, 13 Aug 2021 20:32:03 UTC |
	|         | multinode-20210813202501-13784                              |                                    |         |         |                               |                               |
	|         | --wait=true -v=8                                            |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	|         | --driver=docker                                             |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210813202501-13784-m03 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:32:04 UTC | Fri, 13 Aug 2021 20:32:32 UTC |
	|         | multinode-20210813202501-13784-m03                          |                                    |         |         |                               |                               |
	|         | --driver=docker                                             |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	| delete  | -p                                                          | multinode-20210813202501-13784-m03 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:32:32 UTC | Fri, 13 Aug 2021 20:32:35 UTC |
	|         | multinode-20210813202501-13784-m03                          |                                    |         |         |                               |                               |
	| -p      | multinode-20210813202501-13784                              | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:32:35 UTC | Fri, 13 Aug 2021 20:32:36 UTC |
	|         | logs -n 25                                                  |                                    |         |         |                               |                               |
	| delete  | -p                                                          | multinode-20210813202501-13784     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:32:37 UTC | Fri, 13 Aug 2021 20:32:42 UTC |
	|         | multinode-20210813202501-13784                              |                                    |         |         |                               |                               |
	| start   | -p                                                          | test-preload-20210813203431-13784  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:34:32 UTC | Fri, 13 Aug 2021 20:36:24 UTC |
	|         | test-preload-20210813203431-13784                           |                                    |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                             |                                    |         |         |                               |                               |
	|         | --wait=true --preload=false                                 |                                    |         |         |                               |                               |
	|         | --driver=docker                                             |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0                                |                                    |         |         |                               |                               |
	| ssh     | -p                                                          | test-preload-20210813203431-13784  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:24 UTC | Fri, 13 Aug 2021 20:36:29 UTC |
	|         | test-preload-20210813203431-13784                           |                                    |         |         |                               |                               |
	|         | -- sudo crictl pull busybox                                 |                                    |         |         |                               |                               |
	| start   | -p                                                          | test-preload-20210813203431-13784  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:29 UTC | Fri, 13 Aug 2021 20:37:03 UTC |
	|         | test-preload-20210813203431-13784                           |                                    |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                             |                                    |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker                            |                                    |         |         |                               |                               |
	|         |  --container-runtime=crio                                   |                                    |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3                                |                                    |         |         |                               |                               |
	| ssh     | -p                                                          | test-preload-20210813203431-13784  | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:03 UTC | Fri, 13 Aug 2021 20:37:03 UTC |
	|         | test-preload-20210813203431-13784                           |                                    |         |         |                               |                               |
	|         | -- sudo crictl image ls                                     |                                    |         |         |                               |                               |
	|---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:36:29
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:36:29.813787  141726 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:36:29.813855  141726 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:36:29.813859  141726 out.go:311] Setting ErrFile to fd 2...
	I0813 20:36:29.813862  141726 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:36:29.813954  141726 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:36:29.814167  141726 out.go:305] Setting JSON to false
	I0813 20:36:29.850947  141726 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":4752,"bootTime":1628882237,"procs":199,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:36:29.851049  141726 start.go:121] virtualization: kvm guest
	I0813 20:36:29.853699  141726 out.go:177] * [test-preload-20210813203431-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:36:29.855175  141726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:36:29.853859  141726 notify.go:169] Checking for updates...
	I0813 20:36:29.856631  141726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:36:29.857957  141726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:36:29.859337  141726 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:36:29.859773  141726 config.go:177] Loaded profile config "test-preload-20210813203431-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0813 20:36:29.861427  141726 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 20:36:29.861462  141726 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:36:29.909687  141726 docker.go:132] docker version: linux-19.03.15
	I0813 20:36:29.909783  141726 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:36:29.990169  141726 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-13 20:36:29.946278743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:36:29.990273  141726 docker.go:244] overlay module found
	I0813 20:36:29.992172  141726 out.go:177] * Using the docker driver based on existing profile
	I0813 20:36:29.992194  141726 start.go:278] selected driver: docker
	I0813 20:36:29.992200  141726 start.go:751] validating driver "docker" against &{Name:test-preload-20210813203431-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20210813203431-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:36:29.992310  141726 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:36:29.992357  141726 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:36:29.992391  141726 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:36:29.993696  141726 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:36:29.994545  141726 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:36:30.071820  141726 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-13 20:36:30.029496428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:36:30.071949  141726 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:36:30.071979  141726 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:36:30.074396  141726 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:36:30.074499  141726 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:36:30.074526  141726 cni.go:93] Creating CNI manager for ""
	I0813 20:36:30.074536  141726 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:36:30.074548  141726 start_flags.go:277] config:
	{Name:test-preload-20210813203431-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813203431-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:36:30.076192  141726 out.go:177] * Starting control plane node test-preload-20210813203431-13784 in cluster test-preload-20210813203431-13784
	I0813 20:36:30.076227  141726 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:36:30.077544  141726 out.go:177] * Pulling base image ...
	I0813 20:36:30.077571  141726 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0813 20:36:30.077675  141726 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:36:30.163132  141726 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:36:30.163158  141726 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	W0813 20:36:30.234732  141726 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4 status code: 404
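
The 404 above is the pivotal event of this run: no preloaded-images tarball is published for v1.17.3 with cri-o, so minikube falls back to caching and loading each image individually, which is what the rest of this log shows. A minimal Go sketch of that probe, assuming only the URL layout visible in the log line (the function name is illustrative, not minikube's actual preload.go API):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // preloadExists is a hypothetical stand-in for minikube's preload check;
    // the URL layout is copied from the 404'd request in the log above.
    func preloadExists(k8sVersion, runtime string) (bool, error) {
    	url := fmt.Sprintf("https://storage.googleapis.com/minikube-preloaded-volume-tarballs/"+
    		"preloaded-images-k8s-v11-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
    	resp, err := http.Head(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	// A 404 means "no tarball for this version/runtime": fall back to
    	// caching and loading images one by one.
    	return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
    	ok, err := preloadExists("v1.17.3", "cri-o")
    	fmt.Println("preload exists:", ok, err)
    }
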
	I0813 20:36:30.234950  141726 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/config.json ...
	I0813 20:36:30.235011  141726 cache.go:108] acquiring lock: {Name:mkba69b0e6f833bbc3169832b699a2072359fe89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235034  141726 cache.go:108] acquiring lock: {Name:mkdb86f504045653cb4c3c832a80dc2d1df44d5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235098  141726 cache.go:108] acquiring lock: {Name:mk2bb04c9314ba6b8b4a6d4993507a92a5b6584f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235152  141726 cache.go:108] acquiring lock: {Name:mkd54d590a0b31658d671b55f9cd8723f20ab55f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235181  141726 cache.go:108] acquiring lock: {Name:mkdb560cd2dcc557d279e1bc5428d1312ea750ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235194  141726 cache.go:108] acquiring lock: {Name:mkb7a1f68bff3ac15aa63313333156cb053d897e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235242  141726 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:36:30.235218  141726 cache.go:108] acquiring lock: {Name:mk8c34a28406a13c284a2e9147cf6fa90ac1d4b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235243  141726 cache.go:108] acquiring lock: {Name:mkaacd7607e2526208a4e774ed0834b86580f6d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235275  141726 start.go:313] acquiring machines lock for test-preload-20210813203431-13784: {Name:mk86e3e130081b2a4b3221746d7108dd7bcbad0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235281  141726 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:36:30.235234  141726 cache.go:108] acquiring lock: {Name:mk67d5b6dee57e2b9c77f5a3a549df126956854b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235292  141726 cache.go:108] acquiring lock: {Name:mkf49ffbb332301f49bb0b6961b0fb2c9c638317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:36:30.235304  141726 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 301.814µs
	I0813 20:36:30.235311  141726 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:36:30.235319  141726 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:36:30.235752  141726 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0813 20:36:30.235784  141726 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 763.76µs
	I0813 20:36:30.235807  141726 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0813 20:36:30.235835  141726 start.go:317] acquired machines lock for "test-preload-20210813203431-13784" in 548.17µs
	I0813 20:36:30.235843  141726 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:36:30.235854  141726 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:36:30.235865  141726 fix.go:55] fixHost starting: 
	I0813 20:36:30.235863  141726 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
	I0813 20:36:30.235864  141726 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 624.704µs
	I0813 20:36:30.235889  141726 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:36:30.235893  141726 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 834.503µs
	I0813 20:36:30.235916  141726 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
	I0813 20:36:30.236019  141726 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:36:30.236042  141726 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 751.457µs
	I0813 20:36:30.236031  141726 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:36:30.236053  141726 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:36:30.236116  141726 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:36:30.236246  141726 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 20:36:30.236277  141726 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 1.139707ms
	I0813 20:36:30.236289  141726 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 20:36:30.236330  141726 cli_runner.go:115] Run: docker container inspect test-preload-20210813203431-13784 --format={{.State.Status}}
	I0813 20:36:30.236367  141726 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:36:30.237197  141726 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:36:30.237223  141726 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:36:30.237277  141726 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:36:30.237383  141726 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
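
The burst of cache.go lock/exists/save lines above is a per-image fan-out: each image gets its own file lock, a check for an existing tar under .minikube/cache/images, and a download only on a miss (the kube-* images, absent from the cache, are the ones still being retrieved). A rough sketch of that pattern, with hypothetical names standing in for minikube's cache.go internals and the locking omitted:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    	"sync"
    	"time"
    )

    // cachePath maps "k8s.gcr.io/pause:3.1" to
    // "<root>/cache/images/k8s.gcr.io/pause_3.1", matching the paths in the log.
    func cachePath(root, image string) string {
    	return filepath.Join(root, "cache", "images", strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
    	images := []string{
    		"k8s.gcr.io/pause:3.1",
    		"k8s.gcr.io/coredns:1.6.5",
    		"k8s.gcr.io/kube-proxy:v1.17.3",
    	}
    	root := os.Getenv("MINIKUBE_HOME")
    	var wg sync.WaitGroup
    	for _, img := range images {
    		wg.Add(1)
    		go func(img string) { // one goroutine per image, as in the log
    			defer wg.Done()
    			start := time.Now()
    			p := cachePath(root, img)
    			if _, err := os.Stat(p); err == nil {
    				// Cache hit: report how long the check took, like cache.go:97.
    				fmt.Printf("cache image %q -> %q took %s\n", img, p, time.Since(start))
    				return
    			}
    			// Cache miss: the real code pulls the image and writes the tar
    			// before logging "save to tar file ... succeeded".
    			fmt.Printf("cache miss for %q, would download\n", img)
    		}(img)
    	}
    	wg.Wait()
    }
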
	I0813 20:36:30.274600  141726 fix.go:108] recreateIfNeeded on test-preload-20210813203431-13784: state=Running err=<nil>
	W0813 20:36:30.274648  141726 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:36:30.277350  141726 out.go:177] * Updating the running docker "test-preload-20210813203431-13784" container ...
	I0813 20:36:30.277390  141726 machine.go:88] provisioning docker machine ...
	I0813 20:36:30.277410  141726 ubuntu.go:169] provisioning hostname "test-preload-20210813203431-13784"
	I0813 20:36:30.277468  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:36:30.315449  141726 main.go:130] libmachine: Using SSH client type: native
	I0813 20:36:30.315665  141726 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0813 20:36:30.315686  141726 main.go:130] libmachine: About to run SSH command:
	sudo hostname test-preload-20210813203431-13784 && echo "test-preload-20210813203431-13784" | sudo tee /etc/hostname
	I0813 20:36:30.448820  141726 main.go:130] libmachine: SSH cmd err, output: <nil>: test-preload-20210813203431-13784
	
	I0813 20:36:30.448896  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:36:30.488349  141726 main.go:130] libmachine: Using SSH client type: native
	I0813 20:36:30.488508  141726 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0813 20:36:30.488537  141726 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20210813203431-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20210813203431-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20210813203431-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:36:30.613674  141726 main.go:130] libmachine: SSH cmd err, output: <nil>: 
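
Each "About to run SSH command" / "SSH cmd err, output" pair above is one SSH session against the container's published port (127.0.0.1:32857) using the machine's id_rsa key. A self-contained sketch of such a runner using golang.org/x/crypto/ssh; the address, user, and key path are taken from this log, and host-key checking is skipped only because this is a throwaway test container:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("127.0.0.1:32857", "docker",
    		os.ExpandEnv("$HOME/.minikube/machines/test-preload-20210813203431-13784/id_rsa"),
    		`sudo hostname test-preload-20210813203431-13784 && echo "test-preload-20210813203431-13784" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
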
	I0813 20:36:30.613709  141726 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:36:30.613736  141726 ubuntu.go:177] setting up certificates
	I0813 20:36:30.613749  141726 provision.go:83] configureAuth start
	I0813 20:36:30.613812  141726 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20210813203431-13784
	I0813 20:36:30.651948  141726 provision.go:138] copyHostCerts
	I0813 20:36:30.652010  141726 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:36:30.652021  141726 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:36:30.652076  141726 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:36:30.652148  141726 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:36:30.652158  141726 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:36:30.652180  141726 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:36:30.652227  141726 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:36:30.652234  141726 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:36:30.652252  141726 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:36:30.652294  141726 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.test-preload-20210813203431-13784 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20210813203431-13784]
	I0813 20:36:30.936054  141726 provision.go:172] copyRemoteCerts
	I0813 20:36:30.936109  141726 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:36:30.936143  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:36:30.974274  141726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813203431-13784/id_rsa Username:docker}
	I0813 20:36:31.064814  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:36:31.080781  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 20:36:31.096054  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:36:31.111126  141726 provision.go:86] duration metric: configureAuth took 497.360232ms
	I0813 20:36:31.111157  141726 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:36:31.111295  141726 config.go:177] Loaded profile config "test-preload-20210813203431-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.17.3
	I0813 20:36:31.111399  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:36:31.151086  141726 main.go:130] libmachine: Using SSH client type: native
	I0813 20:36:31.151246  141726 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0813 20:36:31.151263  141726 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:36:31.347640  141726 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0813 20:36:31.356230  141726 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0813 20:36:31.375372  141726 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0813 20:36:31.396345  141726 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0813 20:36:31.748444  141726 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:36:31.748474  141726 machine.go:91] provisioned docker machine in 1.471075281s
	I0813 20:36:31.748485  141726 start.go:267] post-start starting for "test-preload-20210813203431-13784" (driver="docker")
	I0813 20:36:31.748493  141726 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:36:31.748556  141726 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:36:31.748596  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:36:31.794923  141726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813203431-13784/id_rsa Username:docker}
	I0813 20:36:31.889877  141726 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:36:31.892839  141726 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:36:31.892866  141726 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:36:31.892879  141726 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:36:31.892887  141726 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:36:31.892899  141726 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:36:31.892967  141726 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:36:31.893075  141726 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:36:31.893214  141726 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:36:31.899898  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:36:31.918181  141726 start.go:270] post-start completed in 169.679768ms
	I0813 20:36:31.918242  141726 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:36:31.918284  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:36:31.965906  141726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813203431-13784/id_rsa Username:docker}
	I0813 20:36:32.061692  141726 fix.go:57] fixHost completed within 1.825819334s
	I0813 20:36:32.061721  141726 start.go:80] releasing machines lock for "test-preload-20210813203431-13784", held for 1.825875187s
	I0813 20:36:32.061805  141726 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20210813203431-13784
	I0813 20:36:32.100390  141726 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:36:32.100455  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:36:32.138115  141726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813203431-13784/id_rsa Username:docker}
	I0813 20:36:32.561294  141726 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 exists
	I0813 20:36:32.561340  141726 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3" took 2.326157383s
	I0813 20:36:32.561358  141726 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 succeeded
	I0813 20:36:32.653943  141726 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 exists
	I0813 20:36:32.653988  141726 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3" took 2.418861986s
	I0813 20:36:32.654000  141726 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 succeeded
	I0813 20:36:32.667895  141726 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 exists
	I0813 20:36:32.667934  141726 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3" took 2.432764406s
	I0813 20:36:32.667951  141726 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 succeeded
	I0813 20:36:34.299446  141726 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 exists
	I0813 20:36:34.299492  141726 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3" took 4.064341346s
	I0813 20:36:34.299505  141726 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 succeeded
	I0813 20:36:34.299523  141726 cache.go:88] Successfully saved all images to host disk.
	I0813 20:36:34.299614  141726 ssh_runner.go:149] Run: systemctl --version
	I0813 20:36:34.303589  141726 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:36:34.312745  141726 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:36:34.321062  141726 docker.go:153] disabling docker service ...
	I0813 20:36:34.321108  141726 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:36:34.329334  141726 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:36:34.337184  141726 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:36:34.447928  141726 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:36:34.553912  141726 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:36:34.562908  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:36:34.574590  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0813 20:36:34.581731  141726 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:36:34.581755  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 20:36:34.588915  141726 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:36:34.594447  141726 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:36:34.594485  141726 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:36:34.600710  141726 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
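
The netfilter check above is a verify-then-fallback sequence: sysctl exits with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding enabled. A compact sketch of that sequence, using plain os/exec in place of minikube's ssh_runner; the commands match the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func ensureBridgeNetfilter() error {
    	// crio.go:128 treats a sysctl failure as "might be okay": the proc
    	// entry is simply absent until br_netfilter is loaded.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	// Finally make sure the host forwards IPv4, as the last Run above does.
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println(err)
    	}
    }
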
	I0813 20:36:34.606261  141726 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:36:34.712763  141726 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:36:34.721935  141726 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:36:34.721988  141726 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:36:34.724970  141726 start.go:413] Will wait 60s for crictl version
	I0813 20:36:34.725020  141726 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:36:34.751796  141726 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:36:34.751862  141726 ssh_runner.go:149] Run: crio --version
	I0813 20:36:34.811962  141726 ssh_runner.go:149] Run: crio --version
	I0813 20:36:34.871437  141726 out.go:177] * Preparing Kubernetes v1.17.3 on CRI-O 1.20.3 ...
	I0813 20:36:34.871502  141726 cli_runner.go:115] Run: docker network inspect test-preload-20210813203431-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:36:34.908459  141726 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:36:34.911869  141726 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0813 20:36:34.911930  141726 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:36:34.938756  141726 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.17.3". assuming images are not preloaded.
	I0813 20:36:34.938777  141726 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 20:36:34.938827  141726 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:36:34.938869  141726 image.go:133] retrieving image: k8s.gcr.io/pause:3.1
	I0813 20:36:34.938887  141726 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:36:34.938911  141726 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0813 20:36:34.938946  141726 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:36:34.938967  141726 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:36:34.938972  141726 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:36:34.939009  141726 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0813 20:36:34.938946  141726 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:36:34.939035  141726 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:36:34.942148  141726 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:36:34.942179  141726 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:36:34.942339  141726 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:36:34.951205  141726 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0813 20:36:34.951766  141726 image.go:171] found k8s.gcr.io/pause:3.1 locally: &{Image:0xc000be2560}
	I0813 20:36:34.951844  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0813 20:36:35.293763  141726 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000be20a0}
	I0813 20:36:35.293876  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:36:35.466229  141726 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc00068e2e0}
	I0813 20:36:35.466317  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:36:35.483572  141726 image.go:171] found k8s.gcr.io/coredns:1.6.5 locally: &{Image:0xc000cec080}
	I0813 20:36:35.483655  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0813 20:36:35.823219  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:36:35.827381  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:36:35.846146  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:36:35.847020  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:36:35.961102  141726 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.17.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.3" does not exist at hash "d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad" in container runtime
	I0813 20:36:35.961158  141726 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:36:35.961200  141726 ssh_runner.go:149] Run: which crictl
	I0813 20:36:35.974080  141726 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.17.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.3" does not exist at hash "90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" in container runtime
	I0813 20:36:35.974128  141726 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:36:35.974183  141726 ssh_runner.go:149] Run: which crictl
	I0813 20:36:35.979092  141726 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.17.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.3" does not exist at hash "b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302" in container runtime
	I0813 20:36:35.979145  141726 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:36:35.979186  141726 ssh_runner.go:149] Run: which crictl
	I0813 20:36:35.983722  141726 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.17.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.3" does not exist at hash "ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1" in container runtime
	I0813 20:36:35.983764  141726 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:36:35.983800  141726 ssh_runner.go:149] Run: which crictl
	I0813 20:36:35.983816  141726 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 20:36:35.983889  141726 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 20:36:35.983919  141726 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 20:36:36.012212  141726 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0813 20:36:36.012271  141726 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.17.3
	I0813 20:36:36.012300  141726 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 20:36:36.012312  141726 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0813 20:36:36.012369  141726 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0813 20:36:36.012392  141726 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 20:36:36.012414  141726 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 20:36:36.017686  141726 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.3': No such file or directory
	I0813 20:36:36.017715  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 --> /var/lib/minikube/images/kube-apiserver_v1.17.3 (50635776 bytes)
	I0813 20:36:36.083882  141726 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.3': No such file or directory
	I0813 20:36:36.083924  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 --> /var/lib/minikube/images/kube-controller-manager_v1.17.3 (48810496 bytes)
	I0813 20:36:36.084047  141726 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.3': No such file or directory
	I0813 20:36:36.084049  141726 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0813 20:36:36.084066  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 --> /var/lib/minikube/images/kube-scheduler_v1.17.3 (33822208 bytes)
	I0813 20:36:36.084142  141726 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 20:36:36.102472  141726 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.3': No such file or directory
	I0813 20:36:36.102503  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 --> /var/lib/minikube/images/kube-proxy_v1.17.3 (48706048 bytes)
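
Each transfer above is guarded by a stat existence check: the runner stats the remote tar with stat -c "%s %y", and a status-1 exit ("No such file or directory") triggers the scp. Sketched below with local os/exec standing in for the SSH runner; copyToRemote is a placeholder, not a minikube API:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func ensureRemoteFile(path string) error {
    	// ssh_runner.go:306 runs this remotely; exit status 1 means the tar
    	// is missing and must be transferred.
    	if err := exec.Command("stat", "-c", "%s %y", path).Run(); err == nil {
    		return nil // file already present, skip the transfer
    	}
    	return copyToRemote(path)
    }

    func copyToRemote(path string) error {
    	fmt.Println("would scp the cached tar to", path)
    	return nil
    }

    func main() {
    	if err := ensureRemoteFile("/var/lib/minikube/images/kube-proxy_v1.17.3"); err != nil {
    		fmt.Println(err)
    	}
    }
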
	I0813 20:36:36.362178  141726 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 20:36:36.362243  141726 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 20:36:37.333708  141726 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc00068e180}
	I0813 20:36:37.333806  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:36:37.957276  141726 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc000e68120}
	I0813 20:36:37.957380  141726 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0813 20:36:38.150996  141726 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3: (1.788727327s)
	I0813 20:36:38.151023  141726 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 from cache
	I0813 20:36:38.151047  141726 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 20:36:38.151097  141726 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 20:36:41.396963  141726 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3: (3.245838058s)
	I0813 20:36:41.396990  141726 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 from cache
	I0813 20:36:41.397032  141726 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 20:36:41.397086  141726 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 20:36:44.345103  141726 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3: (2.947994465s)
	I0813 20:36:44.345129  141726 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 from cache
	I0813 20:36:44.345159  141726 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 20:36:44.345239  141726 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 20:36:46.091264  141726 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3: (1.745995944s)
	I0813 20:36:46.091305  141726 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 from cache
	I0813 20:36:46.091330  141726 cache_images.go:113] Successfully loaded all cached images
	I0813 20:36:46.091337  141726 cache_images.go:82] LoadImages completed in 11.152547389s
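
The "Loading image" / "Completed" pairs above are strictly sequential: each transferred tar is fed to sudo podman load -i, one at a time, and the per-image wall time is reported. Roughly, assuming a local runner in place of the SSH one and podman available on the host:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func loadImages(tars []string) error {
    	for _, tar := range tars {
    		start := time.Now()
    		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("podman load -i %s: %v: %s", tar, err, out)
    		}
    		// Matches the "Completed: sudo podman load -i ...: (1.78s)" lines.
    		fmt.Printf("loaded %s in %s\n", tar, time.Since(start))
    	}
    	return nil
    }

    func main() {
    	err := loadImages([]string{
    		"/var/lib/minikube/images/kube-scheduler_v1.17.3",
    		"/var/lib/minikube/images/kube-apiserver_v1.17.3",
    		"/var/lib/minikube/images/kube-controller-manager_v1.17.3",
    		"/var/lib/minikube/images/kube-proxy_v1.17.3",
    	})
    	if err != nil {
    		fmt.Println(err)
    	}
    }
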
	I0813 20:36:46.091400  141726 ssh_runner.go:149] Run: crio config
	I0813 20:36:46.156106  141726 cni.go:93] Creating CNI manager for ""
	I0813 20:36:46.156130  141726 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:36:46.156142  141726 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:36:46.156156  141726 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.17.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20210813203431-13784 NodeName:test-preload-20210813203431-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:36:46.156296  141726 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "test-preload-20210813203431-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
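	The generated config above is multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a file before handing it to kubeadm is to split on the `---` separators and confirm each document parses; a sketch using gopkg.in/yaml.v2 against the path the config is written to below (not part of minikube's pipeline):

```go
package main

import (
	"fmt"
	"os"
	"strings"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	// Path from the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" step below.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// kubeadm configs are multi-document YAML; split on the document separator.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			fmt.Println("parse error:", err)
			continue
		}
		fmt.Printf("kind=%v apiVersion=%v\n", m["kind"], m["apiVersion"])
	}
}
```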
	
	I0813 20:36:46.156387  141726 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-20210813203431-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813203431-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
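	The bare `ExecStart=` line in the drop-in above is deliberate: for a service that is not Type=oneshot, systemd allows only one ExecStart, so an override must first clear the inherited value before supplying the replacement. A hedged sketch of rendering such a drop-in with text/template (the template and its fields are trimmed down for illustration):

```go
package main

import (
	"os"
	"text/template"
)

// Drop-in template: the empty "ExecStart=" resets the unit's inherited command
// line so the following ExecStart= fully replaces it.
const dropin = `[Service]
ExecStart=
ExecStart={{.Binary}} --kubeconfig={{.Kubeconfig}} --container-runtime-endpoint={{.CRISocket}}
`

func main() {
	t := template.Must(template.New("kubelet-dropin").Parse(dropin))
	_ = t.Execute(os.Stdout, struct {
		Binary, Kubeconfig, CRISocket string
	}{
		Binary:     "/var/lib/minikube/binaries/v1.17.3/kubelet",
		Kubeconfig: "/etc/kubernetes/kubelet.conf",
		CRISocket:  "/var/run/crio/crio.sock",
	})
}
```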
	I0813 20:36:46.156442  141726 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.17.3
	I0813 20:36:46.163082  141726 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.3': No such file or directory
	
	Initiating transfer...
	I0813 20:36:46.163126  141726 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.3
	I0813 20:36:46.169556  141726 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubectl
	I0813 20:36:46.169556  141726 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubelet
	I0813 20:36:46.169557  141726 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubeadm
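	The `?checksum=file:` suffix on these URLs tells the downloader to fetch the sibling `.sha256` file and verify the binary against it before caching. The idea, sketched with only the standard library (a reimplementation of the concept, not minikube's downloader):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch returns the body of url, or an error on a non-200 status.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file carries the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	fmt.Println("kubectl verified:", want)
}
```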
	I0813 20:36:47.320278  141726 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl
	I0813 20:36:47.320351  141726 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm
	I0813 20:36:47.324500  141726 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubeadm': No such file or directory
	I0813 20:36:47.324519  141726 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubectl': No such file or directory
	I0813 20:36:47.324531  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubectl --> /var/lib/minikube/binaries/v1.17.3/kubectl (43499520 bytes)
	I0813 20:36:47.324532  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubeadm --> /var/lib/minikube/binaries/v1.17.3/kubeadm (39346176 bytes)
	I0813 20:36:47.724493  141726 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:36:47.734391  141726 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:36:47.746974  141726 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet
	I0813 20:36:47.749819  141726 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubelet': No such file or directory
	I0813 20:36:47.749855  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.17.3/kubelet --> /var/lib/minikube/binaries/v1.17.3/kubelet (111584792 bytes)
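	Each transfer above is gated by a remote `stat -c "%s %y"` probe: when the file is missing (exit status 1, as here) or its size and mtime disagree with the cached copy, the binary is pushed. A size-only sketch of the same skip-if-unchanged pattern over plain ssh/scp (the host name and paths are placeholders):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		host   = "minikube-node"               // placeholder host alias
		local  = "cache/linux/v1.17.3/kubelet" // placeholder local cache path
		remote = "/var/lib/minikube/binaries/v1.17.3/kubelet"
	)
	// "stat -c %s" prints the size in bytes; a non-zero exit means the file is absent.
	out, err := exec.Command("ssh", host, "stat", "-c", "%s", remote).Output()
	st, lerr := os.Stat(local)
	if lerr != nil {
		panic(lerr)
	}
	if err == nil && string(out) == fmt.Sprintf("%d\n", st.Size()) {
		fmt.Println("remote copy up to date, skipping")
		return
	}
	// Missing or different: push the cached binary.
	if err := exec.Command("scp", local, host+":"+remote).Run(); err != nil {
		panic(err)
	}
	fmt.Println("copied", local, "->", remote)
}
```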
	I0813 20:36:47.958047  141726 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:36:47.964428  141726 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (564 bytes)
	I0813 20:36:47.975860  141726 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:36:47.987354  141726 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0813 20:36:47.998524  141726 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:36:48.001220  141726 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784 for IP: 192.168.49.2
	I0813 20:36:48.001272  141726 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:36:48.001284  141726 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:36:48.001339  141726 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/client.key
	I0813 20:36:48.001356  141726 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/apiserver.key.dd3b5fb2
	I0813 20:36:48.001373  141726 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/proxy-client.key
	I0813 20:36:48.001469  141726 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:36:48.001536  141726 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:36:48.001549  141726 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:36:48.001577  141726 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:36:48.001607  141726 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:36:48.001632  141726 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:36:48.001677  141726 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:36:48.002661  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:36:48.017687  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:36:48.032685  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:36:48.047744  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:36:48.062758  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:36:48.077742  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:36:48.092898  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:36:48.108346  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:36:48.123621  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:36:48.138373  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:36:48.153579  141726 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:36:48.168417  141726 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:36:48.179472  141726 ssh_runner.go:149] Run: openssl version
	I0813 20:36:48.183818  141726 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:36:48.190461  141726 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:36:48.193368  141726 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:36:48.193427  141726 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:36:48.197828  141726 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:36:48.203775  141726 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:36:48.210471  141726 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:36:48.213185  141726 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:36:48.213226  141726 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:36:48.217554  141726 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:36:48.223555  141726 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:36:48.230254  141726 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:36:48.232994  141726 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:36:48.233037  141726 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:36:48.237497  141726 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
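	The openssl/ln sequence above follows OpenSSL's hashed-directory convention: each CA certificate must be reachable in /etc/ssl/certs through a symlink named `<subject-hash>.0` (for example `b5213941.0` for minikubeCA) so verification can locate it by hash. A sketch that derives the hash with the openssl CLI and creates the link (requires root; paths taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject-name hash OpenSSL uses
	// to look certificates up in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of `ln -fs`: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
```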
	I0813 20:36:48.243402  141726 kubeadm.go:390] StartCluster: {Name:test-preload-20210813203431-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813203431-13784 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:36:48.243484  141726 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:36:48.243523  141726 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:36:48.266868  141726 cri.go:76] found id: "343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3"
	I0813 20:36:48.266890  141726 cri.go:76] found id: "e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05"
	I0813 20:36:48.266897  141726 cri.go:76] found id: "5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59"
	I0813 20:36:48.266909  141726 cri.go:76] found id: "15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f"
	I0813 20:36:48.266915  141726 cri.go:76] found id: "e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a"
	I0813 20:36:48.266922  141726 cri.go:76] found id: "c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b"
	I0813 20:36:48.266930  141726 cri.go:76] found id: "f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e"
	I0813 20:36:48.266934  141726 cri.go:76] found id: "c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3"
	I0813 20:36:48.266940  141726 cri.go:76] found id: ""
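	The eight IDs above come from a single `crictl ps -a --quiet` call filtered on the kube-system namespace label; `--quiet` prints one bare container ID per line, and the final empty entry is just the trailing newline. A sketch issuing the same query (assumes crictl is on PATH and can reach the CRI-O socket):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -a includes stopped containers; --quiet emits bare IDs, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out)) // Fields drops the trailing empty line
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Printf("%d kube-system containers\n", len(ids))
}
```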
	I0813 20:36:48.266968  141726 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:36:48.304243  141726 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8","pid":2803,"status":"running","bundle":"/run/containers/storage/overlay-containers/0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8/userdata","rootfs":"/var/lib/containers/storage/overlay/da7fc6caa4f0dcf01dd3143396da925d1f08ae4c93e15532aa404b419b32d0df/merged","created":"2021-08-13T20:35:33.989767443Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"f9b02339b0ee74d5390e436e953f0aba\",\"kubernetes.io/config.seen\":\"2021-08-13T20:35:30.275898913Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-test-preload-20210813203431-13784_kube-system
_f9b02339b0ee74d5390e436e953f0aba_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:35:33.892400932Z","io.kubernetes.cri-o.HostName":"test-preload-20210813203431-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-test-preload-20210813203431-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210813203431-13784\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"f9b02339b0ee74d5390e436e953f0aba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210813203431-13784_f9b02339b0ee74d5390e436e953f0aba/0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae8
7aff3d1790def8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-test-preload-20210813203431-13784\",\"uid\":\"f9b02339b0ee74d5390e436e953f0aba\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/da7fc6caa4f0dcf01dd3143396da925d1f08ae4c93e15532aa404b419b32d0df/merged","io.kubernetes.cri-o.Name":"k8s_etcd-test-preload-20210813203431-13784_kube-system_f9b02339b0ee74d5390e436e953f0aba_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/r
un/containers/storage/overlay-containers/0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8/userdata/shm","io.kubernetes.pod.name":"etcd-test-preload-20210813203431-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f9b02339b0ee74d5390e436e953f0aba","kubernetes.io/config.hash":"f9b02339b0ee74d5390e436e953f0aba","kubernetes.io/config.seen":"2021-08-13T20:35:30.275898913Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f","pid":3773,"status":"running","bundle":"/run/containers/storage/overlay-containers/15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f/userdata","rootfs":"/var/lib/containers/storage/overlay/134fa8462d25f90b8e6e7594aa5de9e4645e143a2546e6713725a1d5325949ca/merged","created":"2021-08-13T20:35:56.581774206Z","annotations":{"io.container.manager":"cri-o","io.kubernet
es.container.hash":"dbaf924","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dbaf924\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:35:56.384989522Z","io.kubernetes.cri-o.Image":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.17.0","io.kubernetes.cri-o.ImageRef":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409
ed1400f19","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-m97cx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-m97cx_b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/134fa8462d25f90b8e6e7594aa5de9e4645e143a2546e6713725a1d5325949ca/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-m97cx_kube-system_b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-m97cx_kube-sy
stem_b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd/containers/kube-proxy/5f01522d\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/b9
d5cbf3-ffa0-44bc-b076-3d58a88de7bd/volumes/kubernetes.io~secret/kube-proxy-token-4nr8m\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-m97cx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd","kubernetes.io/config.seen":"2021-08-13T20:35:55.685933323Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6","pid":2640,"status":"running","bundle":"/run/containers/storage/overlay-containers/1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6/userdata","rootfs":"/var/lib/containers/storage/overlay/35b262f98faf75b7a5a313e120ed74f1a0b41fec87d71410c492f518b1b7dd0f/merged","created":"2021-08-13T20:35:33.70972105Z","annotations":{"component":"kube-apiserver","io.container.manager":"c
ri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"f8c1872d6958c845ffffb18f158fd9df\",\"kubernetes.io/config.seen\":\"2021-08-13T20:35:30.272399582Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-test-preload-20210813203431-13784_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:35:33.624733674Z","io.kubernetes.cri-o.HostName":"test-preload-20210813203431-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-test-preloa
d-20210813203431-13784","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"f8c1872d6958c845ffffb18f158fd9df\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210813203431-13784\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210813203431-13784_f8c1872d6958c845ffffb18f158fd9df/1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-test-preload-20210813203431-13784\",\"uid\":\"f8c1872d6958c845ffffb18f158fd9df\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/35b262f98faf75b7a5a313e120ed74f1a0b41fec87d71410c492f518b1b7dd0f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-test-preload-20210813203431-13784_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.N
amespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210813203431-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.hash":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.seen":"2021-08-13T20:35:30.272399582Z","kubernetes.io/config.source":"file","org.s
ystemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3","pid":4340,"status":"running","bundle":"/run/containers/storage/overlay-containers/343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3/userdata","rootfs":"/var/lib/containers/storage/overlay/c1ed2ee88c5a4a2664ec2d7c96e67a6a174819d985c27d408fff5a404de6e3f7/merged","created":"2021-08-13T20:36:17.905680245Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f9ae5aa3","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessa
gePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f9ae5aa3\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:36:17.768338647Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/co
redns:1.6.5","io.kubernetes.cri-o.ImageRef":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-6wwpm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-6wwpm_e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c1ed2ee88c5a4a2664ec2d7c96e67a6a174819d985c27d408fff5a404de6e3f7/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6955765f44-6wwpm_kube-system_e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b3b43b5323ab89f01
66ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6955765f44-6wwpm_kube-system_e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2/containers/coredns/751383c5\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2/volumes/kubernetes.io~secret/coredns-
token-jf4wt\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-6955765f44-6wwpm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2","kubernetes.io/config.seen":"2021-08-13T20:35:55.63926934Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59","pid":3923,"status":"running","bundle":"/run/containers/storage/overlay-containers/5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59/userdata","rootfs":"/var/lib/containers/storage/overlay/75634fe10443dd2c7f3f1ff9afc93db448223b61c27aa77746225a34743e2af7/merged","created":"2021-08-13T20:35:57.165733265Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3bd0b2be","io.kubernetes.container.name":"storage-provisio
ner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3bd0b2be\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:35:57.034330397Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kuberne
tes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8c86309a-234d-4821-bc24-9759501af6ef\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_8c86309a-234d-4821-bc24-9759501af6ef/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/75634fe10443dd2c7f3f1ff9afc93db448223b61c27aa77746225a34743e2af7/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_8c86309a-234d-4821-bc24-9759501af6ef_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_8
c86309a-234d-4821-bc24-9759501af6ef_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8c86309a-234d-4821-bc24-9759501af6ef/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8c86309a-234d-4821-bc24-9759501af6ef/containers/storage-provisioner/5d387a1a\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/8c86309a-234d-4821-bc24-9759501af6ef/volumes/kubernetes.io~secret/storage-provisioner-token-bj2kw\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8c86309a-2
34d-4821-bc24-9759501af6ef","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:35:56.59404681Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"80333a2c6db524c8596ae
ce9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57","pid":2712,"status":"running","bundle":"/run/containers/storage/overlay-containers/80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57/userdata","rootfs":"/var/lib/containers/storage/overlay/d49de4e98837c93e0cd2fdcdd7f7821055838740a92a7c6f6cb768189c879ae3/merged","created":"2021-08-13T20:35:33.857758192Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"kubernetes.io/config.seen\":\"2021-08-13T20:35:30.275225613Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-test-preload-20210813203431-13784_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kuber
netes.cri-o.Created":"2021-08-13T20:35:33.745379037Z","io.kubernetes.cri-o.HostName":"test-preload-20210813203431-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-test-preload-20210813203431-13784","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210813203431-13784\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210813203431-13784_bb577061a17ad23cfbbf52e9419bf32a/80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57.log","io.kubernetes.cri-o.Metadata":"
{\"name\":\"kube-scheduler-test-preload-20210813203431-13784\",\"uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d49de4e98837c93e0cd2fdcdd7f7821055838740a92a7c6f6cb768189c879ae3/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-test-preload-20210813203431-13784_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-co
ntainers/80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210813203431-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-13T20:35:30.275225613Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02","pid":2632,"status":"running","bundle":"/run/containers/storage/overlay-containers/844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02/userdata","rootfs":"/var/lib/containers/storage/overlay/6add21603a74ee98056dd863e8af748416af6fe1f2fabeddcf074a60a171c0ef/merged","created":"2021-08-13T20:35:33.701742968Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri
-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"01e1f4e495c3311ccc20368c1e385f74\",\"kubernetes.io/config.seen\":\"2021-08-13T20:35:30.274070512Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-test-preload-20210813203431-13784_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:35:33.626558822Z","io.kubernetes.cri-o.HostName":"test-preload-20210813203431-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-man
ager-test-preload-20210813203431-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210813203431-13784\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"01e1f4e495c3311ccc20368c1e385f74\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210813203431-13784_01e1f4e495c3311ccc20368c1e385f74/844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-test-preload-20210813203431-13784\",\"uid\":\"01e1f4e495c3311ccc20368c1e385f74\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6add21603a74ee98056dd863e8af748416af6fe1f2fabeddcf074a60a171c0ef/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-test-preload-20210813203431-13784_kube-syst
em_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210813203431-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.hash":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.seen":"2021-
08-13T20:35:30.274070512Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731","pid":3891,"status":"running","bundle":"/run/containers/storage/overlay-containers/871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731/userdata","rootfs":"/var/lib/containers/storage/overlay/4ad261886e1c7ecb4afaee75856c7efdd9edaf003c2dda79e3595ea0ee67bd69/merged","created":"2021-08-13T20:35:56.985825061Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io
/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-13T20:35:56.59404681Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_8c86309a-234d-4821-bc24-9759501af6ef_0","io.kubernetes.c
ri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:35:56.908575976Z","io.kubernetes.cri-o.HostName":"test-preload-20210813203431-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"8c86309a-234d-4821-bc24-9759501af6ef\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_8c86309a-234d-4821-bc24-9759501af6ef/871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731.log","io.kubernetes.cri-o.Metadata":"{\"na
me\":\"storage-provisioner\",\"uid\":\"8c86309a-234d-4821-bc24-9759501af6ef\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4ad261886e1c7ecb4afaee75856c7efdd9edaf003c2dda79e3595ea0ee67bd69/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_8c86309a-234d-4821-bc24-9759501af6ef_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/871becfd2877bf272e869d948d791b6a6beb48a3db419c
81a29118603725e731/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8c86309a-234d-4821-bc24-9759501af6ef","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:35:56.59404681Z","kubernetes.io/config.source":"api","org.systemd.property.Coll
ectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa","pid":4308,"status":"running","bundle":"/run/containers/storage/overlay-containers/b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa/userdata","rootfs":"/var/lib/containers/storage/overlay/566a81e1f3497e7b0e0f7445777e814e63bf8519d031be2adc3698574305d4c5/merged","created":"2021-08-13T20:36:17.729766996Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:35:55.63926934Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth727e793e\",\"mac\":\"5e:c9:b1:e4:d5:7b\"},{\"name\":\"eth0\",\"mac\":\"66:0b:2e:8e:16:30\",\"sandbox\":\"/var/run/netns/4f19c789-bb66-40b4-862a-2b982ef9c87b\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gate
way\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6955765f44-6wwpm_kube-system_e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:36:17.581921618Z","io.kubernetes.cri-o.HostName":"coredns-6955765f44-6wwpm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6955765f44-6wwpm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-6wwpm\",\"pod-t
emplate-hash\":\"6955765f44\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-6wwpm_e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2/b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6955765f44-6wwpm\",\"uid\":\"e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/566a81e1f3497e7b0e0f7445777e814e63bf8519d031be2adc3698574305d4c5/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6955765f44-6wwpm_kube-system_e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa/user
data/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa/userdata/shm","io.kubernetes.pod.name":"coredns-6955765f44-6wwpm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:35:55.63926934Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"6955765f44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3","pid":2719,"status":"running","bundle":"/run/containers/storage/overlay-containers/c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3/userdata","rootfs":"/var/lib/containers/
storage/overlay/baea4edb7f6ae597d338f95e0ceab59cb5cfdae158b65fcc9e276201a1cc0624/merged","created":"2021-08-13T20:35:33.893738275Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ec604138","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ec604138\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:35:33.758533649Z","io.kubernetes.cri-o.Image":"5eb3b7486872
441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.17.0","io.kubernetes.cri-o.ImageRef":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210813203431-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"01e1f4e495c3311ccc20368c1e385f74\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210813203431-13784_01e1f4e495c3311ccc20368c1e385f74/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/baea4edb7f6ae597d338f95e0ceab59cb5cfdae158b65fcc9e276201a1cc0624/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-test-preload-20210813203431
-13784_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-test-preload-20210813203431-13784_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/01e1f4e495c3311ccc20368c1e385f74/containers/kube-controller-manager/20553f49\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/01e1f4e495c3311cc
c20368c1e385f74/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210813203431-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"01e1f4e495c3311ccc20368
c1e385f74","kubernetes.io/config.hash":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.seen":"2021-08-13T20:35:30.274070512Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b","pid":2809,"status":"running","bundle":"/run/containers/storage/overlay-containers/c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b/userdata","rootfs":"/var/lib/containers/storage/overlay/d7582a78a551033244dab670672c2b9911f8e10b731edfce2a30b462d826934a/merged","created":"2021-08-13T20:35:34.101765563Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99930feb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":
"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99930feb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:35:33.905910553Z","io.kubernetes.cri-o.Image":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.17.0","io.kubernetes.cri-o.ImageRef":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210813203431-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kuberne
tes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210813203431-13784_bb577061a17ad23cfbbf52e9419bf32a/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d7582a78a551033244dab670672c2b9911f8e10b731edfce2a30b462d826934a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-test-preload-20210813203431-13784_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-test-preload-20210813203431-13784_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cr
i-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/containers/kube-scheduler/8bcab10b\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210813203431-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-13T20:35:30.275225613Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.prop
erty.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684","pid":3677,"status":"running","bundle":"/run/containers/storage/overlay-containers/cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684/userdata","rootfs":"/var/lib/containers/storage/overlay/5ae084823c629bf8458f011ae58cef584ee70edf2bc42a824e2dcf4ac5e92fc1/merged","created":"2021-08-13T20:35:56.257909179Z","annotations":{"app":"kindnet","controller-revision-hash":"59985d8787","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:35:55.684401202Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-js67l_kube-system_94f51ba9-9f3b-4616-b09e-629bdb72ae4f_0","io.kubernetes.cr
i-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:35:56.006487507Z","io.kubernetes.cri-o.HostName":"test-preload-20210813203431-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-js67l","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kindnet\",\"controller-revision-hash\":\"59985d8787\",\"app\":\"kindnet\",\"io.kubernetes.pod.uid\":\"94f51ba9-9f3b-4616-b09e-629bdb72ae4f\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"io.kubernetes.pod.name\":\"kindnet-js67l\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-js67l_94f51ba9-9f3b-4616-b09e-629bdb72ae4f/cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684.log","io.kuberne
tes.cri-o.Metadata":"{\"name\":\"kindnet-js67l\",\"uid\":\"94f51ba9-9f3b-4616-b09e-629bdb72ae4f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5ae084823c629bf8458f011ae58cef584ee70edf2bc42a824e2dcf4ac5e92fc1/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-js67l_kube-system_94f51ba9-9f3b-4616-b09e-629bdb72ae4f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/cd759c06791896805bb5ad11925f6db1
41b9a7adfa1320d183e9378219f6a684/userdata/shm","io.kubernetes.pod.name":"kindnet-js67l","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"94f51ba9-9f3b-4616-b09e-629bdb72ae4f","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-13T20:35:55.684401202Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7","pid":3692,"status":"running","bundle":"/run/containers/storage/overlay-containers/d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7/userdata","rootfs":"/var/lib/containers/storage/overlay/4f6a68d37a9313c8828627984733c66cc28166d97498a20bedc7a480afd65ee2/merged","created":"2021-08-13T20:35:56.257890122Z","annotations":{"controller-revision-hash":"68bd87b66","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/c
onfig.seen\":\"2021-08-13T20:35:55.685933323Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-m97cx_kube-system_b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:35:56.061987856Z","io.kubernetes.cri-o.HostName":"test-preload-20210813203431-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-m97cx","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-m97cx\",\"pod-
template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"68bd87b66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-m97cx_b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd/d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-m97cx\",\"uid\":\"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4f6a68d37a9313c8828627984733c66cc28166d97498a20bedc7a480afd65ee2/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-m97cx_kube-system_b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d30019c85c0e8304ad5ce323feb39e53ab90b
ade5463cf216997b335694faeb7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7/userdata/shm","io.kubernetes.pod.name":"kube-proxy-m97cx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:35:55.685933323Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05","pid":4057,"status":"running","bundle":"/run/containers/storage/overlay-containers/e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05/userdata","rootf
s":"/var/lib/containers/storage/overlay/b3dbc78ee327e0116c2aeb89b2d0eaee4bfbcada2664ee849a1472acafd56d8d/merged","created":"2021-08-13T20:36:04.721733573Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"edd4fffc","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"edd4fffc\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:36:04.607030808Z","io.kubernetes.cri-o.Image":"
docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-js67l\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"94f51ba9-9f3b-4616-b09e-629bdb72ae4f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-js67l_94f51ba9-9f3b-4616-b09e-629bdb72ae4f/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b3dbc78ee327e0116c2aeb89b2d0eaee4bfbcada2664ee849a1472acafd56d8d/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-js67l_kube-system_94f51ba9-9f3b-4616-b09e-629bdb72ae4f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overla
y-containers/cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-js67l_kube-system_94f51ba9-9f3b-4616-b09e-629bdb72ae4f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/94f51ba9-9f3b-4616-b09e-629bdb72ae4f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/94f51ba9-9f3b-4616-b09e-629bdb72ae4f/containers/kindnet-cni/5a3a11ca\",\"readonly\":false},{\"container_path\":\"/etc/cni/ne
t.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/94f51ba9-9f3b-4616-b09e-629bdb72ae4f/volumes/kubernetes.io~secret/kindnet-token-sfr2g\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-js67l","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"94f51ba9-9f3b-4616-b09e-629bdb72ae4f","kubernetes.io/config.seen":"2021-08-13T20:35:55.684401202Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a","pid":2899,"status":"running","bundle":"/run/containers/storage/overlay-containers/e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a/userdata","rootfs":"/var/lib/containers/storage/overlay/92d4de7cd3de5abaa35bb
440ab20387bec68bd7ab1d65549d346ee9f55390ef5/merged","created":"2021-08-13T20:35:34.357616343Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8ac11b2c","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8ac11b2c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:35:34.103810788Z","io.kubernetes.cri-o.Image":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","i
o.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210813203431-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f9b02339b0ee74d5390e436e953f0aba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210813203431-13784_f9b02339b0ee74d5390e436e953f0aba/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/92d4de7cd3de5abaa35bb440ab20387bec68bd7ab1d65549d346ee9f55390ef5/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-test-preload-20210813203431-13784_kube-system_f9b02339b0ee74d5390e436e953f0aba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8/userdata/re
solv.conf","io.kubernetes.cri-o.SandboxID":"0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8","io.kubernetes.cri-o.SandboxName":"k8s_etcd-test-preload-20210813203431-13784_kube-system_f9b02339b0ee74d5390e436e953f0aba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f9b02339b0ee74d5390e436e953f0aba/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f9b02339b0ee74d5390e436e953f0aba/containers/etcd/acf2277d\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-test-preload-20210813203431-13784","io.k
ubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f9b02339b0ee74d5390e436e953f0aba","kubernetes.io/config.hash":"f9b02339b0ee74d5390e436e953f0aba","kubernetes.io/config.seen":"2021-08-13T20:35:30.275898913Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e","pid":2726,"status":"running","bundle":"/run/containers/storage/overlay-containers/f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e/userdata","rootfs":"/var/lib/containers/storage/overlay/85cd7049217cff69825d3691cda12cfee782110dc48be5cc50884e40edb96b4e/merged","created":"2021-08-13T20:35:33.913753759Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ffc41559","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restar
tCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ffc41559\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:35:33.770582924Z","io.kubernetes.cri-o.Image":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.17.0","io.kubernetes.cri-o.ImageRef":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.
kubernetes.pod.name\":\"kube-apiserver-test-preload-20210813203431-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f8c1872d6958c845ffffb18f158fd9df\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210813203431-13784_f8c1872d6958c845ffffb18f158fd9df/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/85cd7049217cff69825d3691cda12cfee782110dc48be5cc50884e40edb96b4e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-test-preload-20210813203431-13784_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-test-pre
load-20210813203431-13784_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f8c1872d6958c845ffffb18f158fd9df/containers/kube-apiserver/ff22e236\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f8c1872d6958c845ffffb18f158fd9df/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/sh
are/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210813203431-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.hash":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.seen":"2021-08-13T20:35:30.272399582Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0813 20:36:48.304926  141726 cri.go:113] list returned 16 containers
	I0813 20:36:48.304938  141726 cri.go:116] container: {ID:0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8 Status:running}
	I0813 20:36:48.304949  141726 cri.go:118] skipping 0b9f1f6450653213b0ac639ba0de33bbec628e2d2b18c36ae87aff3d1790def8 - not in ps
	I0813 20:36:48.304954  141726 cri.go:116] container: {ID:15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f Status:running}
	I0813 20:36:48.304964  141726 cri.go:122] skipping {15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f running}: state = "running", want "paused"
	I0813 20:36:48.304976  141726 cri.go:116] container: {ID:1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6 Status:running}
	I0813 20:36:48.304981  141726 cri.go:118] skipping 1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6 - not in ps
	I0813 20:36:48.304988  141726 cri.go:116] container: {ID:343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3 Status:running}
	I0813 20:36:48.304996  141726 cri.go:122] skipping {343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3 running}: state = "running", want "paused"
	I0813 20:36:48.305004  141726 cri.go:116] container: {ID:5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59 Status:running}
	I0813 20:36:48.305009  141726 cri.go:122] skipping {5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59 running}: state = "running", want "paused"
	I0813 20:36:48.305015  141726 cri.go:116] container: {ID:80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57 Status:running}
	I0813 20:36:48.305019  141726 cri.go:118] skipping 80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57 - not in ps
	I0813 20:36:48.305025  141726 cri.go:116] container: {ID:844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02 Status:running}
	I0813 20:36:48.305030  141726 cri.go:118] skipping 844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02 - not in ps
	I0813 20:36:48.305035  141726 cri.go:116] container: {ID:871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731 Status:running}
	I0813 20:36:48.305040  141726 cri.go:118] skipping 871becfd2877bf272e869d948d791b6a6beb48a3db419c81a29118603725e731 - not in ps
	I0813 20:36:48.305046  141726 cri.go:116] container: {ID:b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa Status:running}
	I0813 20:36:48.305050  141726 cri.go:118] skipping b3b43b5323ab89f0166ea05e3d2502acf47ed8a286ce5e2e915d7c6578e472aa - not in ps
	I0813 20:36:48.305054  141726 cri.go:116] container: {ID:c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3 Status:running}
	I0813 20:36:48.305061  141726 cri.go:122] skipping {c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3 running}: state = "running", want "paused"
	I0813 20:36:48.305065  141726 cri.go:116] container: {ID:c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b Status:running}
	I0813 20:36:48.305072  141726 cri.go:122] skipping {c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b running}: state = "running", want "paused"
	I0813 20:36:48.305077  141726 cri.go:116] container: {ID:cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684 Status:running}
	I0813 20:36:48.305081  141726 cri.go:118] skipping cd759c06791896805bb5ad11925f6db141b9a7adfa1320d183e9378219f6a684 - not in ps
	I0813 20:36:48.305088  141726 cri.go:116] container: {ID:d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7 Status:running}
	I0813 20:36:48.305092  141726 cri.go:118] skipping d30019c85c0e8304ad5ce323feb39e53ab90bade5463cf216997b335694faeb7 - not in ps
	I0813 20:36:48.305097  141726 cri.go:116] container: {ID:e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05 Status:running}
	I0813 20:36:48.305101  141726 cri.go:122] skipping {e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05 running}: state = "running", want "paused"
	I0813 20:36:48.305110  141726 cri.go:116] container: {ID:e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a Status:running}
	I0813 20:36:48.305117  141726 cri.go:122] skipping {e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a running}: state = "running", want "paused"
	I0813 20:36:48.305121  141726 cri.go:116] container: {ID:f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e Status:running}
	I0813 20:36:48.305125  141726 cri.go:122] skipping {f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e running}: state = "running", want "paused"
	I0813 20:36:48.305164  141726 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:36:48.311966  141726 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:36:48.311994  141726 kubeadm.go:600] restartCluster start
	I0813 20:36:48.312041  141726 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:36:48.317900  141726 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:36:48.318584  141726 kubeconfig.go:93] found "test-preload-20210813203431-13784" server: "https://192.168.49.2:8443"
	I0813 20:36:48.318997  141726 kapi.go:59] client config for test-preload-20210813203431-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-202108
13203431-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:36:48.320567  141726 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:36:48.326586  141726 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-13 20:35:27.427112218 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-13 20:36:47.996791152 +0000
	@@ -40,7 +40,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.17.0
	+kubernetesVersion: v1.17.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0813 20:36:48.326599  141726 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:36:48.326610  141726 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:36:48.326647  141726 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:36:48.349206  141726 cri.go:76] found id: "343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3"
	I0813 20:36:48.349227  141726 cri.go:76] found id: "e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05"
	I0813 20:36:48.349238  141726 cri.go:76] found id: "5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59"
	I0813 20:36:48.349244  141726 cri.go:76] found id: "15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f"
	I0813 20:36:48.349250  141726 cri.go:76] found id: "e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a"
	I0813 20:36:48.349256  141726 cri.go:76] found id: "c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b"
	I0813 20:36:48.349261  141726 cri.go:76] found id: "f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e"
	I0813 20:36:48.349268  141726 cri.go:76] found id: "c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3"
	I0813 20:36:48.349275  141726 cri.go:76] found id: ""
	I0813 20:36:48.349282  141726 cri.go:221] Stopping containers: [343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3 e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05 5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59 15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3]
	I0813 20:36:48.349330  141726 ssh_runner.go:149] Run: which crictl
	I0813 20:36:48.351984  141726 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3 e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05 5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59 15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3
	I0813 20:36:49.726033  141726 ssh_runner.go:189] Completed: sudo /usr/bin/crictl stop 343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3 e54df6bf18cf422f03fad5d49d11decea8b78d1d80b90732872cbc4c3a8baa05 5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59 15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3: (1.374011228s)
	I0813 20:36:49.726104  141726 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:36:49.737321  141726 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:36:49.743772  141726 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5615 Aug 13 20:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5651 Aug 13 20:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 13 20:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5599 Aug 13 20:35 /etc/kubernetes/scheduler.conf
	
	I0813 20:36:49.743816  141726 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:36:49.750111  141726 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:36:49.756034  141726 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:36:49.762058  141726 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:36:49.768035  141726 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:36:49.776608  141726 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:36:49.776627  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:36:49.820163  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:36:50.269339  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:36:50.418546  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:36:50.474572  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:36:50.582173  141726 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:36:50.582240  141726 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:36:51.100630  141726 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:36:51.601075  141726 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:36:52.101064  141726 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:36:52.169387  141726 api_server.go:70] duration metric: took 1.587213101s to wait for apiserver process to appear ...
	I0813 20:36:52.169416  141726 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:36:52.169427  141726 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:36:55.872743  141726 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:36:55.872782  141726 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:36:56.373207  141726 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:36:56.377502  141726 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:36:56.377530  141726 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:36:56.872849  141726 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:36:56.877719  141726 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:36:56.877748  141726 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:36:57.373890  141726 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:36:57.378882  141726 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:36:57.384643  141726 api_server.go:139] control plane version: v1.17.3
	I0813 20:36:57.384663  141726 api_server.go:129] duration metric: took 5.215240858s to wait for apiserver health ...
	I0813 20:36:57.384673  141726 cni.go:93] Creating CNI manager for ""
	I0813 20:36:57.384679  141726 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:36:57.386910  141726 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:36:57.386962  141726 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:36:57.390607  141726 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.17.3/kubectl ...
	I0813 20:36:57.390634  141726 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:36:57.403082  141726 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.17.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:36:57.585980  141726 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:36:57.595234  141726 system_pods.go:59] 8 kube-system pods found
	I0813 20:36:57.595262  141726 system_pods.go:61] "coredns-6955765f44-6wwpm" [e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2] Running
	I0813 20:36:57.595267  141726 system_pods.go:61] "etcd-test-preload-20210813203431-13784" [187b7541-5b64-43c7-b999-fbcee534a8a9] Running
	I0813 20:36:57.595271  141726 system_pods.go:61] "kindnet-js67l" [94f51ba9-9f3b-4616-b09e-629bdb72ae4f] Running
	I0813 20:36:57.595280  141726 system_pods.go:61] "kube-apiserver-test-preload-20210813203431-13784" [94afd4a5-7933-4acf-aac1-574dad24199c] Pending
	I0813 20:36:57.595285  141726 system_pods.go:61] "kube-controller-manager-test-preload-20210813203431-13784" [b99a11fa-0100-4cd1-bbb8-1f17ac8ba9af] Pending
	I0813 20:36:57.595288  141726 system_pods.go:61] "kube-proxy-m97cx" [b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd] Running
	I0813 20:36:57.595292  141726 system_pods.go:61] "kube-scheduler-test-preload-20210813203431-13784" [ce5d62a3-a1e3-43fc-b562-a9609adac6ff] Pending
	I0813 20:36:57.595296  141726 system_pods.go:61] "storage-provisioner" [8c86309a-234d-4821-bc24-9759501af6ef] Running
	I0813 20:36:57.595301  141726 system_pods.go:74] duration metric: took 9.298954ms to wait for pod list to return data ...
	I0813 20:36:57.595308  141726 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:36:57.598204  141726 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:36:57.598238  141726 node_conditions.go:123] node cpu capacity is 8
	I0813 20:36:57.598254  141726 node_conditions.go:105] duration metric: took 2.941386ms to run NodePressure ...
	I0813 20:36:57.598274  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:36:57.813290  141726 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:36:57.816173  141726 kubeadm.go:746] kubelet initialised
	I0813 20:36:57.816190  141726 kubeadm.go:747] duration metric: took 2.879685ms waiting for restarted kubelet to initialise ...
	I0813 20:36:57.816198  141726 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:36:57.819639  141726 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6955765f44-6wwpm" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:57.868782  141726 pod_ready.go:92] pod "coredns-6955765f44-6wwpm" in "kube-system" namespace has status "Ready":"True"
	I0813 20:36:57.868816  141726 pod_ready.go:81] duration metric: took 49.149872ms waiting for pod "coredns-6955765f44-6wwpm" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:57.868833  141726 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:57.882109  141726 pod_ready.go:92] pod "etcd-test-preload-20210813203431-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:36:57.882132  141726 pod_ready.go:81] duration metric: took 13.289572ms waiting for pod "etcd-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:57.882149  141726 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:58.464018  141726 pod_ready.go:92] pod "kube-apiserver-test-preload-20210813203431-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:36:58.464042  141726 pod_ready.go:81] duration metric: took 581.883956ms waiting for pod "kube-apiserver-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:58.464052  141726 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:59.471287  141726 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210813203431-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:36:59.471311  141726 pod_ready.go:81] duration metric: took 1.007251553s waiting for pod "kube-controller-manager-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:59.471321  141726 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m97cx" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:59.588309  141726 pod_ready.go:92] pod "kube-proxy-m97cx" in "kube-system" namespace has status "Ready":"True"
	I0813 20:36:59.588328  141726 pod_ready.go:81] duration metric: took 116.999972ms waiting for pod "kube-proxy-m97cx" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:59.588337  141726 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:59.988630  141726 pod_ready.go:92] pod "kube-scheduler-test-preload-20210813203431-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:36:59.988649  141726 pod_ready.go:81] duration metric: took 400.305167ms waiting for pod "kube-scheduler-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:36:59.988659  141726 pod_ready.go:38] duration metric: took 2.172452889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:36:59.988679  141726 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:37:00.006680  141726 ops.go:34] apiserver oom_adj: -16
	I0813 20:37:00.006702  141726 kubeadm.go:604] restartCluster took 11.694700494s
	I0813 20:37:00.006710  141726 kubeadm.go:392] StartCluster complete in 11.763314505s
	I0813 20:37:00.006730  141726 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:37:00.006829  141726 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:37:00.007440  141726 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:37:00.008110  141726 kapi.go:59] client config for test-preload-20210813203431-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-202108
13203431-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:37:00.518355  141726 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-20210813203431-13784" rescaled to 1
	I0813 20:37:00.518422  141726 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}
	I0813 20:37:00.518451  141726 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:37:00.520798  141726 out.go:177] * Verifying Kubernetes components...
	I0813 20:37:00.518488  141726 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0813 20:37:00.520853  141726 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:37:00.520873  141726 addons.go:59] Setting storage-provisioner=true in profile "test-preload-20210813203431-13784"
	I0813 20:37:00.518648  141726 config.go:177] Loaded profile config "test-preload-20210813203431-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.17.3
	I0813 20:37:00.520904  141726 addons.go:59] Setting default-storageclass=true in profile "test-preload-20210813203431-13784"
	I0813 20:37:00.520918  141726 addons.go:135] Setting addon storage-provisioner=true in "test-preload-20210813203431-13784"
	W0813 20:37:00.520929  141726 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:37:00.520973  141726 host.go:66] Checking if "test-preload-20210813203431-13784" exists ...
	I0813 20:37:00.520928  141726 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-20210813203431-13784"
	I0813 20:37:00.521319  141726 cli_runner.go:115] Run: docker container inspect test-preload-20210813203431-13784 --format={{.State.Status}}
	I0813 20:37:00.521448  141726 cli_runner.go:115] Run: docker container inspect test-preload-20210813203431-13784 --format={{.State.Status}}
	I0813 20:37:00.570081  141726 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:37:00.570198  141726 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:37:00.570214  141726 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:37:00.570270  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:37:00.570323  141726 kapi.go:59] client config for test-preload-20210813203431-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/test-preload-20210813203431-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:37:00.578994  141726 addons.go:135] Setting addon default-storageclass=true in "test-preload-20210813203431-13784"
	W0813 20:37:00.579022  141726 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:37:00.579055  141726 host.go:66] Checking if "test-preload-20210813203431-13784" exists ...
	I0813 20:37:00.579463  141726 cli_runner.go:115] Run: docker container inspect test-preload-20210813203431-13784 --format={{.State.Status}}
	I0813 20:37:00.593229  141726 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:37:00.593226  141726 node_ready.go:35] waiting up to 6m0s for node "test-preload-20210813203431-13784" to be "Ready" ...
	I0813 20:37:00.595364  141726 node_ready.go:49] node "test-preload-20210813203431-13784" has status "Ready":"True"
	I0813 20:37:00.595392  141726 node_ready.go:38] duration metric: took 2.125825ms waiting for node "test-preload-20210813203431-13784" to be "Ready" ...
	I0813 20:37:00.595403  141726 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:37:00.599126  141726 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6955765f44-6wwpm" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:00.614957  141726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813203431-13784/id_rsa Username:docker}
	I0813 20:37:00.623623  141726 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:37:00.623651  141726 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:37:00.623698  141726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813203431-13784
	I0813 20:37:00.661603  141726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/test-preload-20210813203431-13784/id_rsa Username:docker}
	I0813 20:37:00.710107  141726 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:37:00.758919  141726 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:37:00.789461  141726 pod_ready.go:92] pod "coredns-6955765f44-6wwpm" in "kube-system" namespace has status "Ready":"True"
	I0813 20:37:00.789480  141726 pod_ready.go:81] duration metric: took 190.331885ms waiting for pod "coredns-6955765f44-6wwpm" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:00.789511  141726 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:00.922359  141726 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:37:00.922389  141726 addons.go:344] enableAddons completed in 403.907854ms
	I0813 20:37:01.188562  141726 pod_ready.go:92] pod "etcd-test-preload-20210813203431-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:37:01.188581  141726 pod_ready.go:81] duration metric: took 399.061385ms waiting for pod "etcd-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:01.188593  141726 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:01.588288  141726 pod_ready.go:92] pod "kube-apiserver-test-preload-20210813203431-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:37:01.588309  141726 pod_ready.go:81] duration metric: took 399.706104ms waiting for pod "kube-apiserver-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:01.588320  141726 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:01.988723  141726 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210813203431-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:37:01.988740  141726 pod_ready.go:81] duration metric: took 400.414148ms waiting for pod "kube-controller-manager-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:01.988752  141726 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m97cx" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:02.389148  141726 pod_ready.go:92] pod "kube-proxy-m97cx" in "kube-system" namespace has status "Ready":"True"
	I0813 20:37:02.389168  141726 pod_ready.go:81] duration metric: took 400.410071ms waiting for pod "kube-proxy-m97cx" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:02.389179  141726 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:02.788611  141726 pod_ready.go:92] pod "kube-scheduler-test-preload-20210813203431-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:37:02.788628  141726 pod_ready.go:81] duration metric: took 399.442361ms waiting for pod "kube-scheduler-test-preload-20210813203431-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:37:02.788639  141726 pod_ready.go:38] duration metric: took 2.193217861s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:37:02.788653  141726 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:37:02.788702  141726 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:37:02.808854  141726 api_server.go:70] duration metric: took 2.290398045s to wait for apiserver process to appear ...
	I0813 20:37:02.808885  141726 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:37:02.808895  141726 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:37:02.813748  141726 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:37:02.814443  141726 api_server.go:139] control plane version: v1.17.3
	I0813 20:37:02.814460  141726 api_server.go:129] duration metric: took 5.568735ms to wait for apiserver health ...
	I0813 20:37:02.814471  141726 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:37:02.989657  141726 system_pods.go:59] 8 kube-system pods found
	I0813 20:37:02.989684  141726 system_pods.go:61] "coredns-6955765f44-6wwpm" [e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2] Running
	I0813 20:37:02.989689  141726 system_pods.go:61] "etcd-test-preload-20210813203431-13784" [187b7541-5b64-43c7-b999-fbcee534a8a9] Running
	I0813 20:37:02.989693  141726 system_pods.go:61] "kindnet-js67l" [94f51ba9-9f3b-4616-b09e-629bdb72ae4f] Running
	I0813 20:37:02.989697  141726 system_pods.go:61] "kube-apiserver-test-preload-20210813203431-13784" [94afd4a5-7933-4acf-aac1-574dad24199c] Running
	I0813 20:37:02.989703  141726 system_pods.go:61] "kube-controller-manager-test-preload-20210813203431-13784" [b99a11fa-0100-4cd1-bbb8-1f17ac8ba9af] Running
	I0813 20:37:02.989707  141726 system_pods.go:61] "kube-proxy-m97cx" [b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd] Running
	I0813 20:37:02.989712  141726 system_pods.go:61] "kube-scheduler-test-preload-20210813203431-13784" [ce5d62a3-a1e3-43fc-b562-a9609adac6ff] Running
	I0813 20:37:02.989715  141726 system_pods.go:61] "storage-provisioner" [8c86309a-234d-4821-bc24-9759501af6ef] Running
	I0813 20:37:02.989721  141726 system_pods.go:74] duration metric: took 175.245392ms to wait for pod list to return data ...
	I0813 20:37:02.989731  141726 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:37:03.189246  141726 default_sa.go:45] found service account: "default"
	I0813 20:37:03.189271  141726 default_sa.go:55] duration metric: took 199.534009ms for default service account to be created ...
	I0813 20:37:03.189280  141726 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:37:03.389990  141726 system_pods.go:86] 8 kube-system pods found
	I0813 20:37:03.390016  141726 system_pods.go:89] "coredns-6955765f44-6wwpm" [e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2] Running
	I0813 20:37:03.390021  141726 system_pods.go:89] "etcd-test-preload-20210813203431-13784" [187b7541-5b64-43c7-b999-fbcee534a8a9] Running
	I0813 20:37:03.390025  141726 system_pods.go:89] "kindnet-js67l" [94f51ba9-9f3b-4616-b09e-629bdb72ae4f] Running
	I0813 20:37:03.390029  141726 system_pods.go:89] "kube-apiserver-test-preload-20210813203431-13784" [94afd4a5-7933-4acf-aac1-574dad24199c] Running
	I0813 20:37:03.390034  141726 system_pods.go:89] "kube-controller-manager-test-preload-20210813203431-13784" [b99a11fa-0100-4cd1-bbb8-1f17ac8ba9af] Running
	I0813 20:37:03.390039  141726 system_pods.go:89] "kube-proxy-m97cx" [b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd] Running
	I0813 20:37:03.390043  141726 system_pods.go:89] "kube-scheduler-test-preload-20210813203431-13784" [ce5d62a3-a1e3-43fc-b562-a9609adac6ff] Running
	I0813 20:37:03.390047  141726 system_pods.go:89] "storage-provisioner" [8c86309a-234d-4821-bc24-9759501af6ef] Running
	I0813 20:37:03.390053  141726 system_pods.go:126] duration metric: took 200.767794ms to wait for k8s-apps to be running ...
	I0813 20:37:03.390068  141726 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:37:03.390112  141726 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:37:03.399442  141726 system_svc.go:56] duration metric: took 9.368678ms WaitForService to wait for kubelet.
	I0813 20:37:03.399467  141726 kubeadm.go:547] duration metric: took 2.881014223s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:37:03.399492  141726 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:37:03.589228  141726 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:37:03.589251  141726 node_conditions.go:123] node cpu capacity is 8
	I0813 20:37:03.589263  141726 node_conditions.go:105] duration metric: took 189.765589ms to run NodePressure ...
	I0813 20:37:03.589273  141726 start.go:231] waiting for startup goroutines ...
	I0813 20:37:03.632155  141726 start.go:462] kubectl: 1.20.5, cluster: 1.17.3 (minor skew: 3)
	I0813 20:37:03.634331  141726 out.go:177] 
	W0813 20:37:03.634484  141726 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.17.3.
	I0813 20:37:03.636009  141726 out.go:177]   - Want kubectl v1.17.3? Try 'minikube kubectl -- get pods -A'
	I0813 20:37:03.637543  141726 out.go:177] * Done! kubectl is now configured to use "test-preload-20210813203431-13784" cluster and "default" namespace by default
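
The tail of the start log above is minikube's standard restart verification: poll the apiserver's /healthz until it answers 200 ("returned 200: ok"), then confirm system pods, the default service account, and the kubelet service. A minimal sketch of that healthz poll in Go, using only the standard library; the endpoint and timeout are taken from the log, and TLS verification is deliberately skipped here to keep the sketch short (minikube itself dials with the cluster CA from rest.Config):

    // healthzpoll: a sketch of the apiserver readiness check recorded above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Illustrative only; real code should trust the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "returned 200: ok", as in the log
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }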
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:34:34 UTC, end at Fri 2021-08-13 20:37:04 UTC. --
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.161129168Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.166394378Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.168724795Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.170914299Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.179217949Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.179239934Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.179253476Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.183703346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.186004237Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.188228272Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.196793806Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.196828557Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.196850990Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.201125992Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.203352297Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.205464351Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.216129504Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.216170359Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.257798524Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.257884798Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.263088192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.265644428Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.268032561Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.276819295Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:36:58 test-preload-20210813203431-13784 crio[4503]: time="2021-08-13 20:36:58.276842708Z" level=warning msg="Default CNI network name kindnet is unchangeable"
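
The repeated CREATE/WRITE/RENAME warnings above are CRI-O's CNI monitor reacting to kindnet atomically rewriting 10-kindnet.conflist (write a .temp file, then rename it into place); each event triggers a re-scan of /etc/cni/net.d. A sketch of that watch loop with github.com/fsnotify/fsnotify; the directory is the one from the log, and the reload itself is stubbed out as a log line:

    // cniwatch: a sketch of the config-directory monitoring visible in the
    // CRI-O log above.
    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	watcher, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer watcher.Close()

    	if err := watcher.Add("/etc/cni/net.d"); err != nil {
    		log.Fatal(err)
    	}
    	for {
    		select {
    		case ev := <-watcher.Events:
    			// CRI-O logs these as CREATE / WRITE / RENAME, then re-scans
    			// the directory to pick up the default network.
    			log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
    		case err := <-watcher.Errors:
    			log.Println("watch error:", err)
    		}
    	}
    }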
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID
	18d30dfbbb89d       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                     6 seconds ago        Running             kindnet-cni               1                   cd759c0679189
	510c36c63d4fc       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19                                     7 seconds ago        Running             kube-proxy                1                   d30019c85c0e8
	d0126d8650c2c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     8 seconds ago        Running             storage-provisioner       1                   871becfd2877b
	9847349f390c6       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     8 seconds ago        Running             coredns                   1                   b3b43b5323ab8
	dba00b34a9c2c       d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad                                     13 seconds ago       Running             kube-scheduler            0                   67bfb3c9bb03f
	b6815a4720eb4       b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302                                     13 seconds ago       Running             kube-controller-manager   0                   75d1b751ea19a
	9e3812d80da2a       90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b                                     13 seconds ago       Running             kube-apiserver            0                   1f1bc8246efe0
	908731958dfcc       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                     13 seconds ago       Running             etcd                      1                   0b9f1f6450653
	343739a3a7b46       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     46 seconds ago       Exited              coredns                   0                   b3b43b5323ab8
	e54df6bf18cf4       docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1   About a minute ago   Exited              kindnet-cni               0                   cd759c0679189
	5e534d0c7c0a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     About a minute ago   Exited              storage-provisioner       0                   871becfd2877b
	15a7ba84f832b       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19                                     About a minute ago   Exited              kube-proxy                0                   d30019c85c0e8
	e68139a66c8e4       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                     About a minute ago   Exited              etcd                      0                   0b9f1f6450653
	c45db9c8bca7d       78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28                                     About a minute ago   Exited              kube-scheduler            0                   80333a2c6db52
	f6a591072ea06       0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2                                     About a minute ago   Exited              kube-apiserver            0                   1df641f2b1d13
	c06eaa5683fad       5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056                                     About a minute ago   Exited              kube-controller-manager   0                   844077f5987f2
	
	* 
	* ==> coredns [343739a3a7b46193fff33bee88085bab7bb629feb1a07089ff5c0cb5917fa7f3] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> coredns [9847349f390c61c509b6adcb97f53986f028dab66ac99b16c7ece501f98ae64a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-20210813203431-13784
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-20210813203431-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=test-preload-20210813203431-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_35_41_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:35:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-20210813203431-13784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:36:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:36:56 +0000   Fri, 13 Aug 2021 20:35:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:36:56 +0000   Fri, 13 Aug 2021 20:35:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:36:56 +0000   Fri, 13 Aug 2021 20:35:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:36:56 +0000   Fri, 13 Aug 2021 20:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    test-preload-20210813203431-13784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                9f0bc36c-371c-4faf-9749-c4492cb9961e
	  Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.17.3
	  Kube-Proxy Version:         v1.17.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6955765f44-6wwpm                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     69s
	  kube-system                 etcd-test-preload-20210813203431-13784                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kindnet-js67l                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      69s
	  kube-system                 kube-apiserver-test-preload-20210813203431-13784              250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-test-preload-20210813203431-13784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-m97cx                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-test-preload-20210813203431-13784              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                           Message
	  ----    ------                   ----               ----                                           -------
	  Normal  Starting                 84s                kubelet, test-preload-20210813203431-13784     Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s                kubelet, test-preload-20210813203431-13784     Node test-preload-20210813203431-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s                kubelet, test-preload-20210813203431-13784     Node test-preload-20210813203431-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s                kubelet, test-preload-20210813203431-13784     Node test-preload-20210813203431-13784 status is now: NodeHasSufficientPID
	  Normal  NodeReady                74s                kubelet, test-preload-20210813203431-13784     Node test-preload-20210813203431-13784 status is now: NodeReady
	  Normal  Starting                 68s                kube-proxy, test-preload-20210813203431-13784  Starting kube-proxy.
	  Normal  Starting                 14s                kubelet, test-preload-20210813203431-13784     Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet, test-preload-20210813203431-13784     Node test-preload-20210813203431-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet, test-preload-20210813203431-13784     Node test-preload-20210813203431-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x8 over 14s)  kubelet, test-preload-20210813203431-13784     Node test-preload-20210813203431-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 6s                 kube-proxy, test-preload-20210813203431-13784  Starting kube-proxy.
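
The condition table above (MemoryPressure/DiskPressure/PIDPressure/Ready) is the same data minikube reads when it verifies the NodePressure condition near the end of the start log. A hedged client-go sketch of that read; the kubeconfig path is a placeholder, and the node name is the one from this run:

    // nodeconditions: a sketch of reading the node conditions shown in the
    // describe output above.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path; minikube writes its own kubeconfig location.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
    		"test-preload-20210813203431-13784", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
    	}
    }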
	
	* 
	* ==> dmesg <==
	* [Aug13 20:28] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:29] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth1aecd32a
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 96 0a 2d 75 01 05 08 06        ........-u....
	[  +0.639908] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth92c0c5cc
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 52 2c c5 c5 a9 29 08 06        ......R,...)..
	[ +17.613690] cgroup: cgroup2: unknown option "nsdelegate"
	[ +27.494943] cgroup: cgroup2: unknown option "nsdelegate"
	[  +6.231603] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth6c202a5c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce cc 50 31 7d 8b 08 06        ........P1}...
	[Aug13 20:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:31] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth90cc9965
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 72 31 b4 54 33 e2 08 06        ......r1.T3...
	[  +0.035858] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethe2538e48
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff b6 2f e5 2b 50 70 08 06        ......./.+Pp..
	[ +18.561888] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:32] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth09b00039
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 62 87 f5 1f 14 0c 08 06        ......b.......
	[  +2.452020] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:34] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:36] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0e f2 f9 b6 18 47 08 06        ...........G..
	[  +0.000003] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e f2 f9 b6 18 47 08 06        ...........G..
	[ +11.581571] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth727e793e
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 66 0b 2e 8e 16 30 08 06        ......f....0..
	
	* 
	* ==> etcd [908731958dfccb2e8448fa379201fad93f84b4e07ad9b8d590be0b14195a3f31] <==
	* 2021-08-13 20:36:51.494439 I | embed: initial advertise peer URLs = https://192.168.49.2:2380
	2021-08-13 20:36:51.494443 I | embed: initial cluster = 
	2021-08-13 20:36:51.559905 I | etcdserver: restarting member aec36adc501070cc in cluster fa54960ea34d58be at commit index 446
	raft2021/08/13 20:36:51 INFO: aec36adc501070cc switched to configuration voters=()
	raft2021/08/13 20:36:51 INFO: aec36adc501070cc became follower at term 2
	raft2021/08/13 20:36:51 INFO: newRaft aec36adc501070cc [peers: [], term: 2, commit: 446, applied: 0, lastindex: 446, lastterm: 2]
	2021-08-13 20:36:51.565174 W | auth: simple token is not cryptographically signed
	2021-08-13 20:36:51.568396 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2021/08/13 20:36:51 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-13 20:36:51.570285 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-13 20:36:51.570421 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:36:51.570478 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:36:51.571101 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:36:51.571248 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-13 20:36:51.571299 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 20:36:53 INFO: aec36adc501070cc is starting a new election at term 2
	raft2021/08/13 20:36:53 INFO: aec36adc501070cc became candidate at term 3
	raft2021/08/13 20:36:53 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3
	raft2021/08/13 20:36:53 INFO: aec36adc501070cc became leader at term 3
	raft2021/08/13 20:36:53 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3
	2021-08-13 20:36:53.461522 I | etcdserver: published {Name:test-preload-20210813203431-13784 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-13 20:36:53.461545 I | embed: ready to serve client requests
	2021-08-13 20:36:53.461569 I | embed: ready to serve client requests
	2021-08-13 20:36:53.463048 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:36:53.463071 I | embed: serving client requests on 192.168.49.2:2379
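
The restarted member above replays its WAL to commit index 446, wins a fresh election (term 2 → 3), and only then serves client requests on 2379. One way to confirm it is serving again, sketched with go.etcd.io/etcd/client/v3; note this cluster enforces client-cert auth, so the TLS wiring (elided here) would need the material under /var/lib/minikube/certs/etcd:

    // etcdstatus: a sketch of a post-restart probe against the member above.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
    	cli, err := clientv3.New(clientv3.Config{
    		Endpoints:   []string{"https://192.168.49.2:2379"},
    		DialTimeout: 5 * time.Second,
    		// TLS: load the etcd client cert/key/CA here (elided).
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	st, err := cli.Status(ctx, "https://192.168.49.2:2379")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// After the election above, the member reports itself as leader at term 3.
    	fmt.Printf("leader=%x raftTerm=%d\n", st.Leader, st.RaftTerm)
    }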
	
	* 
	* ==> etcd [e68139a66c8e491db27343f7d6204542957a973e02e8c81846df97bfa05bfb9a] <==
	* 2021-08-13 20:35:34.468400 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-13 20:35:34.468519 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 20:35:35 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/13 20:35:35 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/13 20:35:35 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/13 20:35:35 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/13 20:35:35 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-13 20:35:35.360531 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:35:35.361210 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:35:35.361268 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:35:35.361315 I | etcdserver: published {Name:test-preload-20210813203431-13784 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-13 20:35:35.361336 I | embed: ready to serve client requests
	2021-08-13 20:35:35.361346 I | embed: ready to serve client requests
	2021-08-13 20:35:35.363519 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:35:35.363627 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-13 20:35:38.135933 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-test-preload-20210813203431-13784\" " with result "range_response_count:0 size:4" took too long (361.294801ms) to execute
	2021-08-13 20:35:38.137620 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (362.783967ms) to execute
	2021-08-13 20:35:38.137924 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (327.033567ms) to execute
	2021-08-13 20:35:42.609614 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/token-cleaner\" " with result "range_response_count:1 size:193" took too long (137.554078ms) to execute
	2021-08-13 20:35:42.609685 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-test-preload-20210813203431-13784\" " with result "range_response_count:1 size:2426" took too long (114.873903ms) to execute
	2021-08-13 20:36:00.311076 W | wal: sync duration of 1.258662122s, expected less than 1s
	2021-08-13 20:36:00.442346 W | etcdserver: request "header:<ID:8128006947575414670 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:250 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128006947575414668 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>" with result "size:16" took too long (131.03649ms) to execute
	2021-08-13 20:36:00.442589 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:763" took too long (1.222395494s) to execute
	2021-08-13 20:36:00.442746 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-6955765f44-6wwpm\" " with result "range_response_count:1 size:1704" took too long (959.489093ms) to execute
	2021-08-13 20:36:00.442887 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (291.932835ms) to execute
	
	* 
	* ==> kernel <==
	*  20:37:05 up  1:19,  0 users,  load average: 1.25, 1.12, 0.92
	Linux test-preload-20210813203431-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [9e3812d80da2ac406023c1c844a117b90ef556d60f39ad8983b710e6e8cabd97] <==
	* I0813 20:36:55.806009       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0813 20:36:55.816837       1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0813 20:36:55.817050       1 autoregister_controller.go:140] Starting autoregister controller
	I0813 20:36:55.817111       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0813 20:36:55.817163       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0813 20:36:55.817178       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	I0813 20:36:55.819287       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:36:55.858013       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0813 20:36:55.858096       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
	I0813 20:36:55.858166       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0813 20:36:55.858216       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0813 20:36:55.957670       1 cache.go:39] Caches are synced for autoregister controller
	I0813 20:36:55.957670       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0813 20:36:55.958013       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 20:36:55.958028       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 20:36:55.958315       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
	I0813 20:36:56.804474       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0813 20:36:56.804503       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:36:56.804512       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:36:56.808343       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
	I0813 20:36:57.582000       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0813 20:36:57.688162       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0813 20:36:57.708331       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0813 20:36:57.801847       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:36:57.807796       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [f6a591072ea0692bc3ab7891c57bf51e160cb77bdb7f647ab8626d590834355e] <==
	* W0813 20:36:49.091874       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091878       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091918       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091922       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091957       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091958       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091963       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091972       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091958       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091989       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.091997       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092000       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092014       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092021       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092026       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092031       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092037       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092054       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092149       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092063       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092129       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092196       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092197       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092224       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0813 20:36:49.092396       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [b6815a4720eb4238ad15ba96da4b23927d68af7f92c6e32cb4fd89a36814e7b3] <==
	* I0813 20:36:58.500181       1 attach_detach_controller.go:342] Starting attach detach controller
	I0813 20:36:58.500200       1 shared_informer.go:197] Waiting for caches to sync for attach detach
	I0813 20:36:58.504257       1 controllermanager.go:533] Started "pvc-protection"
	I0813 20:36:58.504336       1 pvc_protection_controller.go:100] Starting PVC protection controller
	I0813 20:36:58.504350       1 shared_informer.go:197] Waiting for caches to sync for PVC protection
	I0813 20:36:58.514728       1 controllermanager.go:533] Started "namespace"
	I0813 20:36:58.514805       1 namespace_controller.go:200] Starting namespace controller
	I0813 20:36:58.514820       1 shared_informer.go:197] Waiting for caches to sync for namespace
	I0813 20:36:59.027325       1 garbagecollector.go:129] Starting garbage collector controller
	I0813 20:36:59.027351       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
	I0813 20:36:59.027369       1 graph_builder.go:282] GraphBuilder running
	I0813 20:36:59.027488       1 controllermanager.go:533] Started "garbagecollector"
	I0813 20:36:59.032410       1 controllermanager.go:533] Started "daemonset"
	I0813 20:36:59.032515       1 daemon_controller.go:255] Starting daemon sets controller
	I0813 20:36:59.032530       1 shared_informer.go:197] Waiting for caches to sync for daemon sets
	I0813 20:36:59.036887       1 controllermanager.go:533] Started "csrapproving"
	I0813 20:36:59.037007       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0813 20:36:59.037019       1 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
	I0813 20:36:59.042736       1 controllermanager.go:533] Started "bootstrapsigner"
	W0813 20:36:59.042752       1 controllermanager.go:525] Skipping "endpointslice"
	I0813 20:36:59.042818       1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer
	I0813 20:36:59.261846       1 controllermanager.go:533] Started "disruption"
	I0813 20:36:59.261918       1 disruption.go:330] Starting disruption controller
	I0813 20:36:59.261926       1 shared_informer.go:197] Waiting for caches to sync for disruption
	I0813 20:36:59.412236       1 node_ipam_controller.go:94] Sending events to api server.
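
Every "Started X" / "Waiting for caches to sync" pair above is the standard client-go controller handshake: start the shared informers, then block on WaitForCacheSync before reconciling anything. A self-contained sketch of that handshake; it uses the fake clientset purely so the example runs without a cluster:

    // cachesync: a sketch of the informer startup handshake the
    // controller-manager log records for every controller.
    package main

    import (
    	"log"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes/fake"
    	"k8s.io/client-go/tools/cache"
    )

    func main() {
    	cs := fake.NewSimpleClientset() // stand-in for a real clientset
    	stop := make(chan struct{})
    	defer close(stop)

    	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
    	pods := factory.Core().V1().Pods().Informer()

    	factory.Start(stop) // "Starting ... controller"
    	// "Waiting for caches to sync ..." — block until the initial list
    	// has been delivered, then the controller may begin work.
    	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
    		log.Fatal("timed out waiting for caches to sync")
    	}
    	log.Println("Caches are synced") // as in the log above
    }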
	
	* 
	* ==> kube-controller-manager [c06eaa5683fad0a6487fa3530ca3205df2b9c83f3b86a9e4d3bb897519c7e1d3] <==
	* E0813 20:36:49.423065       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://control-plane.minikube.internal:8443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=1&timeout=5m46s&timeoutSeconds=346&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423131       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodSecurityPolicy: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=8m55s&timeoutSeconds=535&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423137       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=348&timeout=7m42s&timeoutSeconds=462&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423132       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=379&timeout=5m19s&timeoutSeconds=319&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423147       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=416&timeout=5m45s&timeoutSeconds=345&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423168       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423177       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=8m15s&timeoutSeconds=495&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423180       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=6m16s&timeoutSeconds=376&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423178       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://control-plane.minikube.internal:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423189       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=6m42s&timeoutSeconds=402&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423208       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=352&timeout=5m8s&timeoutSeconds=308&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423211       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=354&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423217       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=392&timeout=9m31s&timeoutSeconds=571&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423222       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=400&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423215       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://control-plane.minikube.internal:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=1&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423232       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://control-plane.minikube.internal:8443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=1&timeout=9m55s&timeoutSeconds=595&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423242       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://control-plane.minikube.internal:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=44&timeout=6m31s&timeoutSeconds=391&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423251       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=1&timeout=8m15s&timeoutSeconds=495&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423260       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://control-plane.minikube.internal:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=146&timeout=8m5s&timeoutSeconds=485&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423263       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423270       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=28&timeout=6m39s&timeoutSeconds=399&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423416       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=8m50s&timeoutSeconds=530&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423427       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://control-plane.minikube.internal:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=330&timeout=5m18s&timeoutSeconds=318&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423443       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=314&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 20:36:49.423485       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m13s&timeoutSeconds=553&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [15a7ba84f832bc3082fe6784efc6fdad71d0201f35f1f1592eed54fcd02c396f] <==
	* W0813 20:35:56.723136       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0813 20:35:56.728864       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I0813 20:35:56.728890       1 server_others.go:145] Using iptables Proxier.
	I0813 20:35:56.729099       1 server.go:571] Version: v1.17.0
	I0813 20:35:56.729593       1 config.go:313] Starting service config controller
	I0813 20:35:56.729593       1 config.go:131] Starting endpoints config controller
	I0813 20:35:56.729618       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0813 20:35:56.729618       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0813 20:35:56.829775       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0813 20:35:56.829794       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [510c36c63d4fc456e1bbdfcbde5e48d3d9bca364b5cd549b7a601974d14cea24] <==
	* W0813 20:36:58.007105       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0813 20:36:58.012986       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I0813 20:36:58.013015       1 server_others.go:145] Using iptables Proxier.
	I0813 20:36:58.013187       1 server.go:571] Version: v1.17.0
	I0813 20:36:58.013617       1 config.go:131] Starting endpoints config controller
	I0813 20:36:58.013644       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0813 20:36:58.013702       1 config.go:313] Starting service config controller
	I0813 20:36:58.013740       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0813 20:36:58.113901       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0813 20:36:58.113936       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [c45db9c8bca7d11924aa956caed8a1f35dc1bed54998ca9460aa5cd57617458b] <==
	* E0813 20:35:37.779723       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:35:37.780111       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:35:37.780250       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:35:37.780296       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:35:37.780370       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:35:37.780381       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:35:37.780455       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:35:37.780477       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:35:37.780503       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:35:37.780562       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:35:37.780581       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:35:37.780613       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:35:38.780751       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:35:38.781744       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:35:38.782784       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:35:38.783993       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:35:38.785053       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:35:38.786137       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:35:38.787174       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:35:38.788135       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:35:38.789337       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:35:38.790347       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:35:38.791669       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:35:38.792627       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 20:35:39.878395       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [dba00b34a9c2cafbfd9e836b7f8d4a045b390ff1265bdeea4e868fe1fc92578c] <==
	* I0813 20:36:52.363576       1 serving.go:312] Generated self-signed cert in-memory
	W0813 20:36:52.844030       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0813 20:36:52.844078       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0813 20:36:55.886366       1 authorization.go:47] Authorization is disabled
	W0813 20:36:55.886385       1 authentication.go:92] Authentication is disabled
	I0813 20:36:55.886394       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0813 20:36:55.887403       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0813 20:36:55.887434       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0813 20:36:55.887465       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:36:55.887487       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:36:55.887709       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0813 20:36:55.887736       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	I0813 20:36:55.987654       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0813 20:36:55.987665       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:34:34 UTC, end at Fri 2021-08-13 20:37:05 UTC. --
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.966912    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-bj2kw" (UniqueName: "kubernetes.io/secret/8c86309a-234d-4821-bc24-9759501af6ef-storage-provisioner-token-bj2kw") pod "storage-provisioner" (UID: "8c86309a-234d-4821-bc24-9759501af6ef")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967033    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/8c86309a-234d-4821-bc24-9759501af6ef-tmp") pod "storage-provisioner" (UID: "8c86309a-234d-4821-bc24-9759501af6ef")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967076    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/94f51ba9-9f3b-4616-b09e-629bdb72ae4f-cni-cfg") pod "kindnet-js67l" (UID: "94f51ba9-9f3b-4616-b09e-629bdb72ae4f")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967098    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/94f51ba9-9f3b-4616-b09e-629bdb72ae4f-xtables-lock") pod "kindnet-js67l" (UID: "94f51ba9-9f3b-4616-b09e-629bdb72ae4f")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967127    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-sfr2g" (UniqueName: "kubernetes.io/secret/94f51ba9-9f3b-4616-b09e-629bdb72ae4f-kindnet-token-sfr2g") pod "kindnet-js67l" (UID: "94f51ba9-9f3b-4616-b09e-629bdb72ae4f")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967159    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-4nr8m" (UniqueName: "kubernetes.io/secret/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd-kube-proxy-token-4nr8m") pod "kube-proxy-m97cx" (UID: "b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967201    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd-kube-proxy") pod "kube-proxy-m97cx" (UID: "b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967231    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd-xtables-lock") pod "kube-proxy-m97cx" (UID: "b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967256    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd-lib-modules") pod "kube-proxy-m97cx" (UID: "b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967336    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/94f51ba9-9f3b-4616-b09e-629bdb72ae4f-lib-modules") pod "kindnet-js67l" (UID: "94f51ba9-9f3b-4616-b09e-629bdb72ae4f")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967394    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2-config-volume") pod "coredns-6955765f44-6wwpm" (UID: "e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967441    6473 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-jf4wt" (UniqueName: "kubernetes.io/secret/e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2-coredns-token-jf4wt") pod "coredns-6955765f44-6wwpm" (UID: "e4c59dd3-9354-4884-a0fe-c3e2a1ff67e2")
	Aug 13 20:36:55 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:55.967481    6473 reconciler.go:156] Reconciler: start to sync state
	Aug 13 20:36:56 test-preload-20210813203431-13784 kubelet[6473]: W0813 20:36:56.020665    6473 status_manager.go:546] Failed to update status for pod "kube-scheduler-test-preload-20210813203431-13784_kube-system(23947c5e-d9f0-4467-b5be-045943a0b32f)": failed to patch status "{\"metadata\":{\"uid\":\"23947c5e-d9f0-4467-b5be-045943a0b32f\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2021-08-13T20:36:50Z\",\"type\":\"Initialized\"},{\"lastTransitionTime\":\"2021-08-13T20:36:50Z\",\"message\":\"containers with unready status: [kube-scheduler]\",\"reason\":\"ContainersNotReady\",\"status\":\"False\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2021-08-13T20:36:50Z\",\"message\":\"containers with unready status: [kube-scheduler]\",\"reason\":\"ContainersNotReady\",\"status\":\"False\",\"type\":\"ContainersReady\"},{\"lastTransitionTime\":\"2021-08-13T20:36:50Z\",\"type\":\"PodScheduled\"}],\"containerStatuses\":[{\"image\":\"k8s.gcr.io/kube-scheduler:v1.17.3\",\"imageID\":\"\",\"lastState\":{},\"name\":\"kube-scheduler\",\"ready\":false,\"restartCount\":0,\"started\":false,\"state\":{\"waiting\":{\"reason\":\"ContainerCreating\"}}}],\"phase\":\"Pending\",\"podIPs\":null,\"startTime\":\"2021-08-13T20:36:50Z\"}}" for pod "kube-system"/"kube-scheduler-test-preload-20210813203431-13784": pods "kube-scheduler-test-preload-20210813203431-13784" not found
	Aug 13 20:36:56 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:56.820314    6473 kubelet_node_status.go:112] Node test-preload-20210813203431-13784 was previously registered
	Aug 13 20:36:56 test-preload-20210813203431-13784 kubelet[6473]: I0813 20:36:56.820424    6473 kubelet_node_status.go:73] Successfully registered node test-preload-20210813203431-13784
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: E0813 20:36:57.068768    6473 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: E0813 20:36:57.068773    6473 secret.go:195] Couldn't get secret kube-system/kube-proxy-token-4nr8m: failed to sync secret cache: timed out waiting for the condition
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: E0813 20:36:57.068898    6473 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd-kube-proxy\" (\"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd\")" failed. No retries permitted until 2021-08-13 20:36:57.568859728 +0000 UTC m=+7.149103189 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd-kube-proxy\") pod \"kube-proxy-m97cx\" (UID: \"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: E0813 20:36:57.068910    6473 secret.go:195] Couldn't get secret kube-system/kindnet-token-sfr2g: failed to sync secret cache: timed out waiting for the condition
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: E0813 20:36:57.068958    6473 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd-kube-proxy-token-4nr8m\" (\"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd\")" failed. No retries permitted until 2021-08-13 20:36:57.568934433 +0000 UTC m=+7.149177902 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy-token-4nr8m\" (UniqueName: \"kubernetes.io/secret/b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd-kube-proxy-token-4nr8m\") pod \"kube-proxy-m97cx\" (UID: \"b9d5cbf3-ffa0-44bc-b076-3d58a88de7bd\") : failed to sync secret cache: timed out waiting for the condition"
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: E0813 20:36:57.068980    6473 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/94f51ba9-9f3b-4616-b09e-629bdb72ae4f-kindnet-token-sfr2g\" (\"94f51ba9-9f3b-4616-b09e-629bdb72ae4f\")" failed. No retries permitted until 2021-08-13 20:36:57.568966461 +0000 UTC m=+7.149209903 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kindnet-token-sfr2g\" (UniqueName: \"kubernetes.io/secret/94f51ba9-9f3b-4616-b09e-629bdb72ae4f-kindnet-token-sfr2g\") pod \"kindnet-js67l\" (UID: \"94f51ba9-9f3b-4616-b09e-629bdb72ae4f\") : failed to sync secret cache: timed out waiting for the condition"
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: W0813 20:36:57.591971    6473 pod_container_deletor.go:75] Container "844077f5987f2dd3c0245bccd41420cd311f68448f661f0cd975c348329b4d02" not found in pod's containers
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: W0813 20:36:57.593000    6473 pod_container_deletor.go:75] Container "1df641f2b1d131863fa89d9cc9045d8e54665842d60d72fffeaa0f9aef3dc6e6" not found in pod's containers
	Aug 13 20:36:57 test-preload-20210813203431-13784 kubelet[6473]: W0813 20:36:57.594132    6473 pod_container_deletor.go:75] Container "80333a2c6db524c8596aece9ecf8a1bcae88f3eb1201d829e6b92cdab0ea2a57" not found in pod's containers
	
	* 
	* ==> storage-provisioner [5e534d0c7c0a3e258d75fb82560f3aafaa174237a182701b5659877c943d2e59] <==
	* I0813 20:35:57.202081       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:35:57.209411       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:35:57.209449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:35:57.213787       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:35:57.213925       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-20210813203431-13784_621abaca-0846-4e5f-8726-f0829cbe065a!
	I0813 20:35:57.214572       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab2dbdd4-449a-43ea-ae09-e1847cd63ad0", APIVersion:"v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-20210813203431-13784_621abaca-0846-4e5f-8726-f0829cbe065a became leader
	I0813 20:35:57.314541       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-20210813203431-13784_621abaca-0846-4e5f-8726-f0829cbe065a!
	
	* 
	* ==> storage-provisioner [d0126d8650c2caf24a1c8d89c08c98259c49998caf85429afd31308fc06fc84e] <==
	* I0813 20:36:56.341603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:36:56.348396       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:36:56.348435       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-20210813203431-13784 -n test-preload-20210813203431-13784
helpers_test.go:262: (dbg) Run:  kubectl --context test-preload-20210813203431-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context test-preload-20210813203431-13784 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context test-preload-20210813203431-13784 describe pod : exit status 1 (47.983979ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context test-preload-20210813203431-13784 describe pod : exit status 1
helpers_test.go:176: Cleaning up "test-preload-20210813203431-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210813203431-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210813203431-13784: (4.152342904s)
--- FAIL: TestPreload (158.11s)

TestScheduledStopUnix (83.31s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210813203710-13784 --memory=2048 --driver=docker  --container-runtime=crio
E0813 20:37:16.206973   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210813203710-13784 --memory=2048 --driver=docker  --container-runtime=crio: (37.458004371s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813203710-13784 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210813203710-13784 -n scheduled-stop-20210813203710-13784
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813203710-13784 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813203710-13784 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813203710-13784 -n scheduled-stop-20210813203710-13784
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813203710-13784
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813203710-13784 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813203710-13784
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210813203710-13784: exit status 3 (2.045844285s)

-- stdout --
	scheduled-stop-20210813203710-13784
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	E0813 20:38:27.870194  150792 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0813 20:38:27.870247  150792 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3

-- stdout --
	scheduled-stop-20210813203710-13784
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	E0813 20:38:27.870194  150792 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0813 20:38:27.870247  150792 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

** /stderr **
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-13 20:38:27.872341717 +0000 UTC m=+1837.793779078
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210813203710-13784
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210813203710-13784:

-- stdout --
	[
	    {
	        "Id": "caf545755ef75296cbda8c3b63293ecce609a996f3aef55c3685718ee772d94d",
	        "Created": "2021-08-13T20:37:11.556034033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:37:11.998479653Z",
	            "FinishedAt": "2021-08-13T20:38:25.929706758Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/caf545755ef75296cbda8c3b63293ecce609a996f3aef55c3685718ee772d94d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/caf545755ef75296cbda8c3b63293ecce609a996f3aef55c3685718ee772d94d/hostname",
	        "HostsPath": "/var/lib/docker/containers/caf545755ef75296cbda8c3b63293ecce609a996f3aef55c3685718ee772d94d/hosts",
	        "LogPath": "/var/lib/docker/containers/caf545755ef75296cbda8c3b63293ecce609a996f3aef55c3685718ee772d94d/caf545755ef75296cbda8c3b63293ecce609a996f3aef55c3685718ee772d94d-json.log",
	        "Name": "/scheduled-stop-20210813203710-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210813203710-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210813203710-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6adc39d810d68efc3bbb74c77ab98456a518d15f9d82e8e1cc889b95dc48d591-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6adc39d810d68efc3bbb74c77ab98456a518d15f9d82e8e1cc889b95dc48d591/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6adc39d810d68efc3bbb74c77ab98456a518d15f9d82e8e1cc889b95dc48d591/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6adc39d810d68efc3bbb74c77ab98456a518d15f9d82e8e1cc889b95dc48d591/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210813203710-13784",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210813203710-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210813203710-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210813203710-13784",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210813203710-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "78c3364bc1a7a6f0505a109795e0cba60d9ab87e93d3195c18d72a7b69ba7228",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/78c3364bc1a7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210813203710-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "caf545755ef7"
	                    ],
	                    "NetworkID": "ecaf12a1bdb52b4258b8915bd6fdd2070f671c23b7f30242298984dd5c18a3f0",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813203710-13784 -n scheduled-stop-20210813203710-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813203710-13784 -n scheduled-stop-20210813203710-13784: exit status 7 (87.792891ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210813203710-13784" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210813203710-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210813203710-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210813203710-13784: (5.379209038s)
--- FAIL: TestScheduledStopUnix (83.31s)

TestRunningBinaryUpgrade (144.07s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.810627664.exe start -p running-upgrade-20210813204143-13784 --memory=2200 --vm-driver=docker  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.810627664.exe start -p running-upgrade-20210813204143-13784 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m21.338674738s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210813204143-13784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:138: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-20210813204143-13784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (59.413312971s)

-- stdout --
	* [running-upgrade-20210813204143-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node running-upgrade-20210813204143-13784 in cluster running-upgrade-20210813204143-13784
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20210813204143-13784" container ...
	
	

-- /stdout --
** stderr ** 
	I0813 20:43:05.204986  216387 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:43:05.205470  216387 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:43:05.205482  216387 out.go:311] Setting ErrFile to fd 2...
	I0813 20:43:05.205536  216387 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:43:05.205793  216387 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:43:05.206152  216387 out.go:305] Setting JSON to false
	I0813 20:43:05.249930  216387 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5148,"bootTime":1628882237,"procs":239,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:43:05.250041  216387 start.go:121] virtualization: kvm guest
	I0813 20:43:05.252260  216387 out.go:177] * [running-upgrade-20210813204143-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:43:05.253761  216387 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:43:05.252394  216387 notify.go:169] Checking for updates...
	I0813 20:43:05.255359  216387 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:43:05.256855  216387 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:43:05.258514  216387 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:43:05.259003  216387 config.go:177] Loaded profile config "running-upgrade-20210813204143-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0813 20:43:05.259030  216387 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:43:05.260985  216387 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 20:43:05.261029  216387 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:43:05.315224  216387 docker.go:132] docker version: linux-19.03.15
	I0813 20:43:05.315339  216387 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:43:05.405844  216387 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:68 SystemTime:2021-08-13 20:43:05.356354512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:43:05.405948  216387 docker.go:244] overlay module found
	I0813 20:43:05.407867  216387 out.go:177] * Using the docker driver based on existing profile
	I0813 20:43:05.407894  216387 start.go:278] selected driver: docker
	I0813 20:43:05.407900  216387 start.go:751] validating driver "docker" against &{Name:running-upgrade-20210813204143-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210813204143-13784 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:43:05.408008  216387 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:43:05.408047  216387 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:43:05.408063  216387 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:43:05.409222  216387 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:43:05.410048  216387 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:43:05.495140  216387 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:68 SystemTime:2021-08-13 20:43:05.447156008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:43:05.495292  216387 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:43:05.495325  216387 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:43:05.497541  216387 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:43:05.497635  216387 cni.go:93] Creating CNI manager for ""
	I0813 20:43:05.497652  216387 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 20:43:05.497664  216387 start_flags.go:277] config:
	{Name:running-upgrade-20210813204143-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210813204143-13784 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:43:05.500992  216387 out.go:177] * Starting control plane node running-upgrade-20210813204143-13784 in cluster running-upgrade-20210813204143-13784
	I0813 20:43:05.501041  216387 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:43:05.502544  216387 out.go:177] * Pulling base image ...
	I0813 20:43:05.502586  216387 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0813 20:43:05.502660  216387 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:43:05.616276  216387 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:43:05.616313  216387 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	W0813 20:43:05.773267  216387 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0813 20:43:05.773483  216387 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813204143-13784/config.json ...
	I0813 20:43:05.773618  216387 cache.go:108] acquiring lock: {Name:mkba69b0e6f833bbc3169832b699a2072359fe89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773632  216387 cache.go:108] acquiring lock: {Name:mk8de78ef83f94d848b402a4790406f2744f8a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773621  216387 cache.go:108] acquiring lock: {Name:mkb37ad652311ea74582253088e2998e28960ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773764  216387 cache.go:108] acquiring lock: {Name:mkb7a1f68bff3ac15aa63313333156cb053d897e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773775  216387 cache.go:108] acquiring lock: {Name:mkaacd7607e2526208a4e774ed0834b86580f6d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773805  216387 cache.go:108] acquiring lock: {Name:mkf49ffbb332301f49bb0b6961b0fb2c9c638317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773816  216387 cache.go:108] acquiring lock: {Name:mk79dcdbc91acd3fc64f2555bafb2eeea50bc520 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773840  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:43:05.773851  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:43:05.773861  216387 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 253.806µs
	I0813 20:43:05.773876  216387 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:43:05.773870  216387 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.52µs
	I0813 20:43:05.773885  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0813 20:43:05.773893  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:43:05.773916  216387 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 101.718µs
	I0813 20:43:05.773912  216387 cache.go:108] acquiring lock: {Name:mkae1ad5a856cae1b322db6f8ea3f3d6badfb5fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773924  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0813 20:43:05.773931  216387 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0813 20:43:05.773892  216387 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:43:05.773779  216387 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:43:05.773947  216387 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 324.05µs
	I0813 20:43:05.773960  216387 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0813 20:43:05.773969  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0813 20:43:05.773957  216387 cache.go:108] acquiring lock: {Name:mkaafb553f1f54e424d391e5c7ae9df3395e5d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773978  216387 start.go:313] acquiring machines lock for running-upgrade-20210813204143-13784: {Name:mk0b060cf41519434676dcce1f5a5486ef5c6291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.773989  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 20:43:05.773985  216387 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 75.792µs
	I0813 20:43:05.773922  216387 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 120.508µs
	I0813 20:43:05.773997  216387 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0813 20:43:05.774001  216387 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 241.878µs
	I0813 20:43:05.774016  216387 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 20:43:05.774019  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0813 20:43:05.774038  216387 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 85.017µs
	I0813 20:43:05.774051  216387 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0813 20:43:05.774002  216387 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:43:05.773904  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0813 20:43:05.774053  216387 start.go:317] acquired machines lock for "running-upgrade-20210813204143-13784" in 58.042µs
	I0813 20:43:05.774069  216387 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 460.69µs
	I0813 20:43:05.774083  216387 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0813 20:43:05.774075  216387 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:43:05.774094  216387 fix.go:55] fixHost starting: m01
	I0813 20:43:05.773801  216387 cache.go:108] acquiring lock: {Name:mk4e9714a804de474fdc1c995487a08b3b2bd64e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:43:05.774335  216387 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0813 20:43:05.774361  216387 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 593.763µs
	I0813 20:43:05.774382  216387 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0813 20:43:05.774385  216387 cli_runner.go:115] Run: docker container inspect running-upgrade-20210813204143-13784 --format={{.State.Status}}
	I0813 20:43:05.774390  216387 cache.go:88] Successfully saved all images to host disk.
	I0813 20:43:05.818366  216387 fix.go:108] recreateIfNeeded on running-upgrade-20210813204143-13784: state=Running err=<nil>
	W0813 20:43:05.818402  216387 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:43:06.012853  216387 out.go:177] * Updating the running docker "running-upgrade-20210813204143-13784" container ...
	I0813 20:43:06.012909  216387 machine.go:88] provisioning docker machine ...
	I0813 20:43:06.012937  216387 ubuntu.go:169] provisioning hostname "running-upgrade-20210813204143-13784"
	I0813 20:43:06.013026  216387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813204143-13784
	I0813 20:43:06.059550  216387 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:06.059791  216387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32917 <nil> <nil>}
	I0813 20:43:06.059814  216387 main.go:130] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20210813204143-13784 && echo "running-upgrade-20210813204143-13784" | sudo tee /etc/hostname
	I0813 20:43:06.178433  216387 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20210813204143-13784
	
	I0813 20:43:06.178519  216387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813204143-13784
	I0813 20:43:06.225758  216387 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:06.225954  216387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32917 <nil> <nil>}
	I0813 20:43:06.225987  216387 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20210813204143-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20210813204143-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20210813204143-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:43:06.333277  216387 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:43:06.333307  216387 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:43:06.333356  216387 ubuntu.go:177] setting up certificates
	I0813 20:43:06.333374  216387 provision.go:83] configureAuth start
	I0813 20:43:06.333456  216387 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210813204143-13784
	I0813 20:43:06.375692  216387 provision.go:138] copyHostCerts
	I0813 20:43:06.375782  216387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:43:06.375795  216387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:43:06.375838  216387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:43:06.375916  216387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:43:06.375926  216387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:43:06.375946  216387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:43:06.375995  216387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:43:06.376005  216387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:43:06.376035  216387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:43:06.376092  216387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20210813204143-13784 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20210813204143-13784]
	I0813 20:43:06.593005  216387 provision.go:172] copyRemoteCerts
	I0813 20:43:06.593071  216387 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:43:06.593111  216387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813204143-13784
	I0813 20:43:06.636835  216387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813204143-13784/id_rsa Username:docker}
	I0813 20:43:06.717780  216387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:43:06.737061  216387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0813 20:43:06.753999  216387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:43:06.770351  216387 provision.go:86] duration metric: configureAuth took 436.961456ms
	I0813 20:43:06.770377  216387 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:43:06.770567  216387 config.go:177] Loaded profile config "running-upgrade-20210813204143-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0813 20:43:06.770705  216387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813204143-13784
	I0813 20:43:06.812894  216387 main.go:130] libmachine: Using SSH client type: native
	I0813 20:43:06.813074  216387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32917 <nil> <nil>}
	I0813 20:43:06.813095  216387 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:43:07.879013  216387 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:43:07.879046  216387 machine.go:91] provisioned docker machine in 1.866128437s
	I0813 20:43:07.879059  216387 start.go:267] post-start starting for "running-upgrade-20210813204143-13784" (driver="docker")
	I0813 20:43:07.879067  216387 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:43:07.879146  216387 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:43:07.879195  216387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813204143-13784
	I0813 20:43:07.920590  216387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813204143-13784/id_rsa Username:docker}
	I0813 20:43:08.004676  216387 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:43:08.007367  216387 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:43:08.007394  216387 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:43:08.007407  216387 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:43:08.007417  216387 info.go:137] Remote host: Ubuntu 19.10
	I0813 20:43:08.007431  216387 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:43:08.007480  216387 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:43:08.007572  216387 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:43:08.007712  216387 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:43:08.013765  216387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:43:08.030202  216387 start.go:270] post-start completed in 151.126081ms
	I0813 20:43:08.030282  216387 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:43:08.030338  216387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813204143-13784
	I0813 20:43:08.074724  216387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813204143-13784/id_rsa Username:docker}
	I0813 20:43:08.151166  216387 fix.go:57] fixHost completed within 2.37706844s
	I0813 20:43:08.151189  216387 start.go:80] releasing machines lock for "running-upgrade-20210813204143-13784", held for 2.377124041s
	I0813 20:43:08.151272  216387 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210813204143-13784
	I0813 20:43:08.195290  216387 ssh_runner.go:149] Run: systemctl --version
	I0813 20:43:08.195345  216387 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:43:08.195423  216387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813204143-13784
	I0813 20:43:08.195349  216387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813204143-13784
	I0813 20:43:08.238392  216387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813204143-13784/id_rsa Username:docker}
	I0813 20:43:08.244160  216387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813204143-13784/id_rsa Username:docker}
	I0813 20:43:08.456958  216387 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:43:08.475876  216387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:43:08.484611  216387 docker.go:153] disabling docker service ...
	I0813 20:43:08.484658  216387 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:43:08.493387  216387 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:43:08.504454  216387 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:43:08.560610  216387 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:43:08.617214  216387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:43:08.625796  216387 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:43:08.636894  216387 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0813 20:43:08.643992  216387 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:43:08.649899  216387 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:43:08.649947  216387 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:43:08.656353  216387 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:43:08.662499  216387 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:43:08.723832  216387 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:43:08.800646  216387 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:43:08.800715  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:08.804038  216387 retry.go:31] will retry after 1.104660288s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:43:09.909531  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:09.912962  216387 retry.go:31] will retry after 2.160763633s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:43:12.074937  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:12.078448  216387 retry.go:31] will retry after 2.62026012s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:43:14.699806  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:14.703424  216387 retry.go:31] will retry after 3.164785382s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:43:17.869605  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:17.873014  216387 retry.go:31] will retry after 4.680977329s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:43:22.554965  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:22.568527  216387 retry.go:31] will retry after 9.01243771s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:43:31.581604  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:31.585204  216387 retry.go:31] will retry after 6.442959172s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:43:38.029594  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:38.033697  216387 retry.go:31] will retry after 11.217246954s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:43:49.253621  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:43:49.257039  216387 retry.go:31] will retry after 15.299675834s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:44:04.558662  216387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:44:04.564197  216387 out.go:177] 
	W0813 20:44:04.564331  216387 out.go:242] X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	W0813 20:44:04.564345  216387 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:44:04.566440  216387 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                          │
	│                                                                                                                                                        │
	│    * Please attach the following file to the GitHub issue:                                                                                             │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 20:44:04.568074  216387 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:140: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-20210813204143-13784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-08-13 20:44:04.584231336 +0000 UTC m=+2174.505668694
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20210813204143-13784
helpers_test.go:236: (dbg) docker inspect running-upgrade-20210813204143-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d2cd4ae2bfab953262635a6a1d42521f4328382c645520ead0a74d79ed7198df",
	        "Created": "2021-08-13T20:41:44.470556774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 199411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:41:44.895545109Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/d2cd4ae2bfab953262635a6a1d42521f4328382c645520ead0a74d79ed7198df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d2cd4ae2bfab953262635a6a1d42521f4328382c645520ead0a74d79ed7198df/hostname",
	        "HostsPath": "/var/lib/docker/containers/d2cd4ae2bfab953262635a6a1d42521f4328382c645520ead0a74d79ed7198df/hosts",
	        "LogPath": "/var/lib/docker/containers/d2cd4ae2bfab953262635a6a1d42521f4328382c645520ead0a74d79ed7198df/d2cd4ae2bfab953262635a6a1d42521f4328382c645520ead0a74d79ed7198df-json.log",
	        "Name": "/running-upgrade-20210813204143-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20210813204143-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fc639bf51555ebf914d8b434b2dfe45c239147f635e5463a0dc2e4e8013ed612-init/diff:/var/lib/docker/overlay2/de6af85d43ab6de82a80599c78c852ce945860493e987ae8d4747813e3e12e71/diff:/var/lib/docker/overlay2/1463f2b27e2cf184f9e8a7e127a3f6ecaa9eb4e8c586d13eb98ef0034f418eca/diff:/var/lib/docker/overlay2/6fae380631f93f264fc69450c6bd514661e47e2e598e586796b4ef5487d2609b/diff:/var/lib/docker/overlay2/9455405085a27b776dbc930a9422413a8738ee14a396dba1428ad3477dd78d19/diff:/var/lib/docker/overlay2/872cbd16ad0ea1d1a8643af87081f3ffd14a4cc7bb05e0117ff9630a1e4c2d63/diff:/var/lib/docker/overlay2/1cfe85b8b9110dde1cfd7cd18efd634d01d4c6b46da62d17a26da23aa02686be/diff:/var/lib/docker/overlay2/189b625246c097ae32fa419f11770e2e28b30b39afd65b82dc25c55530584d10/diff:/var/lib/docker/overlay2/f5b5179d9c5187ae940c59c3a026ef190561c0532770dbd761fecfc6251ebc05/diff:/var/lib/docker/overlay2/116a802d8be0890169902c8fcb2ad1b64b5391fa1a060c1f02d344668cf1e40f/diff:/var/lib/docker/overlay2/d335f4
f8874ac51d7120bb297af4bf45b5ab1c41f3977cabfa2149948695c6e9/diff:/var/lib/docker/overlay2/cfc70be91e8c4eaba2033239d05c70abdaaae7922eebe0a9694302cde2259694/diff:/var/lib/docker/overlay2/901fced2d4ec35a47265e02248dd5ae2f3130431109d25e604d2ab568d1bde04/diff:/var/lib/docker/overlay2/7aa7e86939390a956567b669d4bab83fb60927bb30f5a9803342e0d68bd3e23f/diff:/var/lib/docker/overlay2/a482a71267c1aded8aadff398336811f3437dec13bdea6065ac47ad1eb5eed5f/diff:/var/lib/docker/overlay2/972f22e2510a2c07193729807506aedac3ec49bb2063b2b7c3e443b7380a91c5/diff:/var/lib/docker/overlay2/8c845952b97a856c0093d30bbe000f51feda3cb8d3a525e83d8633d5af175938/diff:/var/lib/docker/overlay2/85f0f897ba04db0a863dd2628b8b2e7d3539cecbb6acd1530907b350763c6550/diff:/var/lib/docker/overlay2/f4060f75e85c12bf3ba15020ed3c17665bed2409afc88787b2341c6d5af01040/diff:/var/lib/docker/overlay2/7fa8f93d5ee1866f01fa7288d688713da7f1044a1942eb59534b94cb95cc3d74/diff:/var/lib/docker/overlay2/0d91418cf4c9ce3175fcb432fd443e696caae83859f6d5e10cdfaf102243e189/diff:/var/lib/d
ocker/overlay2/f4f812cd2dd5b0b125eea4bff29d3ed0d34fa877c492159a8b8b6aee1f536d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc639bf51555ebf914d8b434b2dfe45c239147f635e5463a0dc2e4e8013ed612/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc639bf51555ebf914d8b434b2dfe45c239147f635e5463a0dc2e4e8013ed612/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc639bf51555ebf914d8b434b2dfe45c239147f635e5463a0dc2e4e8013ed612/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20210813204143-13784",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20210813204143-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20210813204143-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20210813204143-13784",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20210813204143-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "785aa4608712660c58c3fe1c6e90cc3d1d2ee65c230b31ada99f7927217b828e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32917"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32916"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32915"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/785aa4608712",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "119e616aba7a93a34d1422c5406d279a04565e481c27adfc15e57756a60108d1",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "36469bd726c69f7f4e80095ea481a26b8d464ff05a26c4950694cdc1005196f7",
	                    "EndpointID": "119e616aba7a93a34d1422c5406d279a04565e481c27adfc15e57756a60108d1",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210813204143-13784 -n running-upgrade-20210813204143-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210813204143-13784 -n running-upgrade-20210813204143-13784: exit status 4 (303.244333ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:44:04.902460  222070 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20210813204143-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 4 (may be ok)
helpers_test.go:242: "running-upgrade-20210813204143-13784" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:176: Cleaning up "running-upgrade-20210813204143-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210813204143-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210813204143-13784: (2.37365818s)
--- FAIL: TestRunningBinaryUpgrade (144.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade (167.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.9.0.005089518.exe start -p stopped-upgrade-20210813204011-13784 --memory=2200 --vm-driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Done: /tmp/minikube-v1.9.0.005089518.exe start -p stopped-upgrade-20210813204011-13784 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m29.25568998s)
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.005089518.exe -p stopped-upgrade-20210813204011-13784 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.005089518.exe -p stopped-upgrade-20210813204011-13784 stop: (11.373458558s)
version_upgrade_test.go:201: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20210813204011-13784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-20210813204011-13784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (1m3.585644337s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210813204011-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node stopped-upgrade-20210813204011-13784 in cluster stopped-upgrade-20210813204011-13784
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-20210813204011-13784" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:41:52.857058  201437 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:41:52.857147  201437 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:52.857152  201437 out.go:311] Setting ErrFile to fd 2...
	I0813 20:41:52.857156  201437 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:52.857284  201437 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:41:52.857554  201437 out.go:305] Setting JSON to false
	I0813 20:41:52.900482  201437 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5075,"bootTime":1628882237,"procs":276,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:41:52.900617  201437 start.go:121] virtualization: kvm guest
	I0813 20:41:52.902661  201437 out.go:177] * [stopped-upgrade-20210813204011-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:41:52.902766  201437 notify.go:169] Checking for updates...
	I0813 20:41:52.903987  201437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:52.905215  201437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:41:52.906477  201437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:41:52.907662  201437 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:41:52.908028  201437 config.go:177] Loaded profile config "stopped-upgrade-20210813204011-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0813 20:41:52.908046  201437 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:41:52.910162  201437 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 20:41:52.910215  201437 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:41:52.964053  201437 docker.go:132] docker version: linux-19.03.15
	I0813 20:41:52.964153  201437 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:53.064549  201437 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:99 SystemTime:2021-08-13 20:41:53.01009145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:41:53.064703  201437 docker.go:244] overlay module found
	I0813 20:41:53.066897  201437 out.go:177] * Using the docker driver based on existing profile
	I0813 20:41:53.066929  201437 start.go:278] selected driver: docker
	I0813 20:41:53.066937  201437 start.go:751] validating driver "docker" against &{Name:stopped-upgrade-20210813204011-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210813204011-13784 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:53.067051  201437 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:41:53.067093  201437 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:53.067113  201437 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:53.068519  201437 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:53.069612  201437 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:53.178579  201437 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:103 SystemTime:2021-08-13 20:41:53.117012694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:41:53.178778  201437 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:53.178823  201437 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:53.180336  201437 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:53.180427  201437 cni.go:93] Creating CNI manager for ""
	I0813 20:41:53.180449  201437 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 20:41:53.180468  201437 start_flags.go:277] config:
	{Name:stopped-upgrade-20210813204011-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210813204011-13784 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:53.182050  201437 out.go:177] * Starting control plane node stopped-upgrade-20210813204011-13784 in cluster stopped-upgrade-20210813204011-13784
	I0813 20:41:53.182095  201437 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:41:53.183188  201437 out.go:177] * Pulling base image ...
	I0813 20:41:53.183219  201437 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0813 20:41:53.183309  201437 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:41:53.350641  201437 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:41:53.350687  201437 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	W0813 20:41:53.455816  201437 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0813 20:41:53.456012  201437 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813204011-13784/config.json ...
	I0813 20:41:53.456071  201437 cache.go:108] acquiring lock: {Name:mkba69b0e6f833bbc3169832b699a2072359fe89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456246  201437 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:41:53.456280  201437 start.go:313] acquiring machines lock for stopped-upgrade-20210813204011-13784: {Name:mkcf799067739d1fb4c4c42e2948741bf92c8d8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456274  201437 cache.go:108] acquiring lock: {Name:mkaafb553f1f54e424d391e5c7ae9df3395e5d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456321  201437 cache.go:108] acquiring lock: {Name:mkb7a1f68bff3ac15aa63313333156cb053d897e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456380  201437 start.go:317] acquired machines lock for "stopped-upgrade-20210813204011-13784" in 77.666µs
	I0813 20:41:53.456371  201437 cache.go:108] acquiring lock: {Name:mk8de78ef83f94d848b402a4790406f2744f8a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456400  201437 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:41:53.456407  201437 fix.go:55] fixHost starting: m01
	I0813 20:41:53.456411  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 20:41:53.456429  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0813 20:41:53.456431  201437 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 121.443µs
	I0813 20:41:53.456444  201437 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 20:41:53.456447  201437 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 79.891µs
	I0813 20:41:53.456460  201437 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0813 20:41:53.456474  201437 cache.go:108] acquiring lock: {Name:mkb37ad652311ea74582253088e2998e28960ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456497  201437 cache.go:108] acquiring lock: {Name:mkaacd7607e2526208a4e774ed0834b86580f6d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456531  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0813 20:41:53.456543  201437 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 71.249µs
	I0813 20:41:53.456553  201437 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0813 20:41:53.456559  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 20:41:53.456566  201437 cache.go:108] acquiring lock: {Name:mk4e9714a804de474fdc1c995487a08b3b2bd64e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456585  201437 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 88.708µs
	I0813 20:41:53.456603  201437 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 20:41:53.456610  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0813 20:41:53.456616  201437 cache.go:108] acquiring lock: {Name:mkf49ffbb332301f49bb0b6961b0fb2c9c638317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456459  201437 cache.go:108] acquiring lock: {Name:mk79dcdbc91acd3fc64f2555bafb2eeea50bc520 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456661  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0813 20:41:53.456668  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 20:41:53.456671  201437 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 215.787µs
	I0813 20:41:53.456681  201437 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0813 20:41:53.456681  201437 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 67.054µs
	I0813 20:41:53.456692  201437 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 20:41:53.456291  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 20:41:53.456698  201437 cache.go:108] acquiring lock: {Name:mkae1ad5a856cae1b322db6f8ea3f3d6badfb5fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:53.456712  201437 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 653.024µs
	I0813 20:41:53.456722  201437 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 20:41:53.456621  201437 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 57.116µs
	I0813 20:41:53.456729  201437 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0813 20:41:53.456735  201437 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210813204011-13784 --format={{.State.Status}}
	I0813 20:41:53.456746  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0813 20:41:53.456756  201437 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0813 20:41:53.456758  201437 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 62.162µs
	I0813 20:41:53.456768  201437 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0813 20:41:53.456768  201437 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 501.121µs
	I0813 20:41:53.456784  201437 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0813 20:41:53.456792  201437 cache.go:88] Successfully saved all images to host disk.
	I0813 20:41:53.527731  201437 fix.go:108] recreateIfNeeded on stopped-upgrade-20210813204011-13784: state=Stopped err=<nil>
	W0813 20:41:53.527761  201437 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:41:53.529941  201437 out.go:177] * Restarting existing docker container for "stopped-upgrade-20210813204011-13784" ...
	I0813 20:41:53.530023  201437 cli_runner.go:115] Run: docker start stopped-upgrade-20210813204011-13784
	I0813 20:41:54.166141  201437 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210813204011-13784 --format={{.State.Status}}
	I0813 20:41:54.212494  201437 kic.go:420] container "stopped-upgrade-20210813204011-13784" state is running.
	I0813 20:41:54.319708  201437 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210813204011-13784
	I0813 20:41:54.363379  201437 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813204011-13784/config.json ...
	I0813 20:41:54.820493  201437 machine.go:88] provisioning docker machine ...
	I0813 20:41:54.820567  201437 ubuntu.go:169] provisioning hostname "stopped-upgrade-20210813204011-13784"
	I0813 20:41:54.820637  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:54.862770  201437 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:54.862963  201437 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32920 <nil> <nil>}
	I0813 20:41:54.862979  201437 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210813204011-13784 && echo "stopped-upgrade-20210813204011-13784" | sudo tee /etc/hostname
	I0813 20:41:54.863538  201437 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50386->127.0.0.1:32920: read: connection reset by peer
	I0813 20:41:58.036670  201437 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210813204011-13784
	
	I0813 20:41:58.036755  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:58.077845  201437 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:58.078048  201437 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32920 <nil> <nil>}
	I0813 20:41:58.078087  201437 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210813204011-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210813204011-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210813204011-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:41:58.181016  201437 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:41:58.181047  201437 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:41:58.181093  201437 ubuntu.go:177] setting up certificates
	I0813 20:41:58.181103  201437 provision.go:83] configureAuth start
	I0813 20:41:58.181153  201437 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210813204011-13784
	I0813 20:41:58.224041  201437 provision.go:138] copyHostCerts
	I0813 20:41:58.224115  201437 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:41:58.224128  201437 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:41:58.224189  201437 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:41:58.224289  201437 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:41:58.224306  201437 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:41:58.224334  201437 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:41:58.224411  201437 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:41:58.224423  201437 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:41:58.224448  201437 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:41:58.224503  201437 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20210813204011-13784 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-20210813204011-13784]
	I0813 20:41:58.537811  201437 provision.go:172] copyRemoteCerts
	I0813 20:41:58.537861  201437 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:41:58.537901  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:58.588196  201437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32920 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/stopped-upgrade-20210813204011-13784/id_rsa Username:docker}
	I0813 20:41:58.673431  201437 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0813 20:41:58.689416  201437 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:41:58.706965  201437 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:41:58.724430  201437 provision.go:86] duration metric: configureAuth took 543.309281ms
	I0813 20:41:58.724462  201437 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:41:58.724680  201437 config.go:177] Loaded profile config "stopped-upgrade-20210813204011-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0813 20:41:58.724845  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:58.777662  201437 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:58.777842  201437 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32920 <nil> <nil>}
	I0813 20:41:58.777859  201437 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:41:59.386481  201437 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:41:59.386524  201437 machine.go:91] provisioned docker machine in 4.566004005s
	I0813 20:41:59.386536  201437 start.go:267] post-start starting for "stopped-upgrade-20210813204011-13784" (driver="docker")
	I0813 20:41:59.386544  201437 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:59.386623  201437 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:59.386669  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:59.433203  201437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32920 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/stopped-upgrade-20210813204011-13784/id_rsa Username:docker}
	I0813 20:41:59.513197  201437 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:59.516002  201437 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:59.516025  201437 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:59.516038  201437 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:59.516045  201437 info.go:137] Remote host: Ubuntu 19.10
	I0813 20:41:59.516056  201437 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:59.516103  201437 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:59.516204  201437 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:41:59.516306  201437 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:59.522455  201437 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:59.538797  201437 start.go:270] post-start completed in 152.247115ms
	I0813 20:41:59.538869  201437 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:59.538908  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:59.585940  201437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32920 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/stopped-upgrade-20210813204011-13784/id_rsa Username:docker}
	I0813 20:41:59.669226  201437 fix.go:57] fixHost completed within 6.212811713s
	I0813 20:41:59.669255  201437 start.go:80] releasing machines lock for "stopped-upgrade-20210813204011-13784", held for 6.212864397s
	I0813 20:41:59.669338  201437 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210813204011-13784
	I0813 20:41:59.723043  201437 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:59.723085  201437 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:59.723104  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:59.723136  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:59.789611  201437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32920 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/stopped-upgrade-20210813204011-13784/id_rsa Username:docker}
	I0813 20:41:59.805603  201437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32920 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/stopped-upgrade-20210813204011-13784/id_rsa Username:docker}
	I0813 20:42:00.164988  201437 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:42:00.270107  201437 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:42:00.280120  201437 docker.go:153] disabling docker service ...
	I0813 20:42:00.280182  201437 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:42:00.302539  201437 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:42:00.311566  201437 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:42:00.358838  201437 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:42:00.431116  201437 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:42:00.440944  201437 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:42:00.452887  201437 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0813 20:42:00.462465  201437 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:42:00.469152  201437 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:42:00.469212  201437 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:42:00.476687  201437 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:42:00.483953  201437 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:42:00.535910  201437 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:42:00.622024  201437 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:42:00.622092  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:00.625421  201437 retry.go:31] will retry after 1.104660288s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:01.731128  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:01.734516  201437 retry.go:31] will retry after 2.160763633s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:03.895844  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:03.899520  201437 retry.go:31] will retry after 2.62026012s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:06.521604  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:06.525075  201437 retry.go:31] will retry after 3.164785382s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:09.690920  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:09.694817  201437 retry.go:31] will retry after 4.680977329s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:14.376647  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:14.379922  201437 retry.go:31] will retry after 9.01243771s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:23.393415  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:23.396995  201437 retry.go:31] will retry after 6.442959172s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:29.841644  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:29.845030  201437 retry.go:31] will retry after 11.217246954s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:41.062567  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:41.066320  201437 retry.go:31] will retry after 15.299675834s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 20:42:56.366730  201437 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:42:56.373035  201437 out.go:177] 
	W0813 20:42:56.373206  201437 out.go:242] X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	W0813 20:42:56.373222  201437 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0813 20:42:56.375276  201437 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                          │
	│                                                                                                                                                        │
	│    * Please attach the following file to the GitHub issue:                                                                                             │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 20:42:56.376899  201437 out.go:177] 

                                                
                                                
** /stderr **
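Note: the retry loop above reports "stat /var/run/crio/crio.sock: Permission denied" for the entire 60s socket wait, so the new binary never saw a usable CRI-O socket after restarting the v1.9.0-era container. A hand-run diagnosis might look like the sketch below; these are standard minikube/systemd commands run against the profile from the log, not part of the test itself:

	# Check the socket's ownership/permissions and the CRI-O service on the node
	minikube ssh -p stopped-upgrade-20210813204011-13784 "sudo ls -l /var/run/crio/crio.sock"
	minikube ssh -p stopped-upgrade-20210813204011-13784 "sudo systemctl status crio"
	minikube ssh -p stopped-upgrade-20210813204011-13784 "sudo journalctl -u crio --no-pager | tail -n 50"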
version_upgrade_test.go:203: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-20210813204011-13784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-08-13 20:42:56.393863529 +0000 UTC m=+2106.315300890
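Note: to replay this scenario outside CI, the three steps the test drives (version_upgrade_test.go:186, :195 and :201 above) reduce to start with the old binary, stop, then start the same profile with the binary under test. A sketch, assuming the standard v1.9.0 release asset name and an illustrative profile name (verify the URL before use):

	# Fetch the old release binary the test exercises
	curl -LO https://github.com/kubernetes/minikube/releases/download/v1.9.0/minikube-linux-amd64
	chmod +x minikube-linux-amd64
	# Start with the old binary, stop, then restart the profile with the new binary
	./minikube-linux-amd64 start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=crio
	./minikube-linux-amd64 -p stopped-upgrade stop
	minikube start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=crio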
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStoppedBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect stopped-upgrade-20210813204011-13784
helpers_test.go:236: (dbg) docker inspect stopped-upgrade-20210813204011-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bd13ca6f4e2b8aa1358eba314a12c11a23e138e08845b9918bc8e462dbb30de4",
	        "Created": "2021-08-13T20:40:12.861281959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:41:54.159943892Z",
	            "FinishedAt": "2021-08-13T20:41:52.337682203Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/bd13ca6f4e2b8aa1358eba314a12c11a23e138e08845b9918bc8e462dbb30de4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bd13ca6f4e2b8aa1358eba314a12c11a23e138e08845b9918bc8e462dbb30de4/hostname",
	        "HostsPath": "/var/lib/docker/containers/bd13ca6f4e2b8aa1358eba314a12c11a23e138e08845b9918bc8e462dbb30de4/hosts",
	        "LogPath": "/var/lib/docker/containers/bd13ca6f4e2b8aa1358eba314a12c11a23e138e08845b9918bc8e462dbb30de4/bd13ca6f4e2b8aa1358eba314a12c11a23e138e08845b9918bc8e462dbb30de4-json.log",
	        "Name": "/stopped-upgrade-20210813204011-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "stopped-upgrade-20210813204011-13784:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3e922e678107891b76962411cbf012b418e648855e1304bd658e980516e4642e-init/diff:/var/lib/docker/overlay2/de6af85d43ab6de82a80599c78c852ce945860493e987ae8d4747813e3e12e71/diff:/var/lib/docker/overlay2/1463f2b27e2cf184f9e8a7e127a3f6ecaa9eb4e8c586d13eb98ef0034f418eca/diff:/var/lib/docker/overlay2/6fae380631f93f264fc69450c6bd514661e47e2e598e586796b4ef5487d2609b/diff:/var/lib/docker/overlay2/9455405085a27b776dbc930a9422413a8738ee14a396dba1428ad3477dd78d19/diff:/var/lib/docker/overlay2/872cbd16ad0ea1d1a8643af87081f3ffd14a4cc7bb05e0117ff9630a1e4c2d63/diff:/var/lib/docker/overlay2/1cfe85b8b9110dde1cfd7cd18efd634d01d4c6b46da62d17a26da23aa02686be/diff:/var/lib/docker/overlay2/189b625246c097ae32fa419f11770e2e28b30b39afd65b82dc25c55530584d10/diff:/var/lib/docker/overlay2/f5b5179d9c5187ae940c59c3a026ef190561c0532770dbd761fecfc6251ebc05/diff:/var/lib/docker/overlay2/116a802d8be0890169902c8fcb2ad1b64b5391fa1a060c1f02d344668cf1e40f/diff:/var/lib/docker/overlay2/d335f4
f8874ac51d7120bb297af4bf45b5ab1c41f3977cabfa2149948695c6e9/diff:/var/lib/docker/overlay2/cfc70be91e8c4eaba2033239d05c70abdaaae7922eebe0a9694302cde2259694/diff:/var/lib/docker/overlay2/901fced2d4ec35a47265e02248dd5ae2f3130431109d25e604d2ab568d1bde04/diff:/var/lib/docker/overlay2/7aa7e86939390a956567b669d4bab83fb60927bb30f5a9803342e0d68bd3e23f/diff:/var/lib/docker/overlay2/a482a71267c1aded8aadff398336811f3437dec13bdea6065ac47ad1eb5eed5f/diff:/var/lib/docker/overlay2/972f22e2510a2c07193729807506aedac3ec49bb2063b2b7c3e443b7380a91c5/diff:/var/lib/docker/overlay2/8c845952b97a856c0093d30bbe000f51feda3cb8d3a525e83d8633d5af175938/diff:/var/lib/docker/overlay2/85f0f897ba04db0a863dd2628b8b2e7d3539cecbb6acd1530907b350763c6550/diff:/var/lib/docker/overlay2/f4060f75e85c12bf3ba15020ed3c17665bed2409afc88787b2341c6d5af01040/diff:/var/lib/docker/overlay2/7fa8f93d5ee1866f01fa7288d688713da7f1044a1942eb59534b94cb95cc3d74/diff:/var/lib/docker/overlay2/0d91418cf4c9ce3175fcb432fd443e696caae83859f6d5e10cdfaf102243e189/diff:/var/lib/d
ocker/overlay2/f4f812cd2dd5b0b125eea4bff29d3ed0d34fa877c492159a8b8b6aee1f536d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e922e678107891b76962411cbf012b418e648855e1304bd658e980516e4642e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e922e678107891b76962411cbf012b418e648855e1304bd658e980516e4642e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e922e678107891b76962411cbf012b418e648855e1304bd658e980516e4642e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "stopped-upgrade-20210813204011-13784",
	                "Source": "/var/lib/docker/volumes/stopped-upgrade-20210813204011-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "stopped-upgrade-20210813204011-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "stopped-upgrade-20210813204011-13784",
	                "name.minikube.sigs.k8s.io": "stopped-upgrade-20210813204011-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c3431b6ba99b22ce93fd66ea02c9d57416bb22f562e0abdbf024b3dc4c54bc3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32920"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32919"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32918"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7c3431b6ba99",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "969ab8407033dac31900b79dda4697e6768cfb65266d7961d69cfb649d0b04ca",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "36469bd726c69f7f4e80095ea481a26b8d464ff05a26c4950694cdc1005196f7",
	                    "EndpointID": "969ab8407033dac31900b79dda4697e6768cfb65266d7961d69cfb649d0b04ca",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
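Note: the only two fields of the inspect dump above that matter for this failure are the host-mapped SSH port and the container IP; both can be pulled directly with Go templates rather than reading the full JSON (the first template is the same one the harness itself runs later in this report; profile name taken from this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' stopped-upgrade-20210813204011-13784
	docker container inspect -f '{{.NetworkSettings.IPAddress}}' stopped-upgrade-20210813204011-13784

Against the dump above these print 32920 and 172.17.0.2.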
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210813204011-13784 -n stopped-upgrade-20210813204011-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210813204011-13784 -n stopped-upgrade-20210813204011-13784: exit status 6 (307.272917ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:42:56.732357  214074 status.go:413] kubeconfig endpoint: extract IP: "stopped-upgrade-20210813204011-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 6 (may be ok)
helpers_test.go:242: "stopped-upgrade-20210813204011-13784" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
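Note: exit status 6 here reflects the kubeconfig problem reported in stderr (the profile's endpoint is missing from the kubeconfig, so status cannot extract an IP), not a stopped host, which is why the harness treats it as "may be ok". Had the profile been kept around, the warning's own suggestion would be the repair (a sketch, assuming the container is still up):

	out/minikube-linux-amd64 update-context -p stopped-upgrade-20210813204011-13784
	out/minikube-linux-amd64 status -p stopped-upgrade-20210813204011-13784

In this run the profile is deleted immediately below instead, so the repair is moot.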
helpers_test.go:176: Cleaning up "stopped-upgrade-20210813204011-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20210813204011-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20210813204011-13784: (2.229443417s)
--- FAIL: TestStoppedBinaryUpgrade (167.50s)
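Note: the TestPause/serial/Pause failure below exercises minikube's pause path as logged: it disables the kubelet over SSH, then enumerates CRI containers in the target namespaces before freezing them. The two enumeration commands in that log can be replayed by hand against a live profile using the same ssh form the harness uses (a repro sketch, assuming the pause profile is still running):

	out/minikube-linux-amd64 -p pause-20210813203929-13784 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-amd64 -p pause-20210813203929-13784 ssh "sudo runc list -f json"

The first returns the container IDs that appear as "found id" lines below; the second produces the JSON inventory that the pause code parses.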

                                                
                                    
x
+
TestPause/serial/Pause (29.65s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813203929-13784 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210813203929-13784 --alsologtostderr -v=5: exit status 80 (1.896379312s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210813203929-13784 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:40:57.390440  186741 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:40:57.390525  186741 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:57.390533  186741 out.go:311] Setting ErrFile to fd 2...
	I0813 20:40:57.390536  186741 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:57.390640  186741 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:40:57.390838  186741 out.go:305] Setting JSON to false
	I0813 20:40:57.390866  186741 mustload.go:65] Loading cluster: pause-20210813203929-13784
	I0813 20:40:57.391158  186741 config.go:177] Loaded profile config "pause-20210813203929-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:40:57.391525  186741 cli_runner.go:115] Run: docker container inspect pause-20210813203929-13784 --format={{.State.Status}}
	I0813 20:40:57.430167  186741 host.go:66] Checking if "pause-20210813203929-13784" exists ...
	I0813 20:40:57.430834  186741 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210813203929-13784 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:40:57.433136  186741 out.go:177] * Pausing node pause-20210813203929-13784 ... 
	I0813 20:40:57.433172  186741 host.go:66] Checking if "pause-20210813203929-13784" exists ...
	I0813 20:40:57.433447  186741 ssh_runner.go:149] Run: systemctl --version
	I0813 20:40:57.433512  186741 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:57.472470  186741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:40:57.566027  186741 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:40:57.575179  186741 pause.go:50] kubelet running: true
	I0813 20:40:57.575241  186741 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:40:57.696417  186741 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:40:57.696512  186741 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:40:57.769714  186741 cri.go:76] found id: "8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b"
	I0813 20:40:57.769748  186741 cri.go:76] found id: "f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5"
	I0813 20:40:57.769757  186741 cri.go:76] found id: "0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d"
	I0813 20:40:57.769763  186741 cri.go:76] found id: "15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e"
	I0813 20:40:57.769769  186741 cri.go:76] found id: "765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803"
	I0813 20:40:57.769778  186741 cri.go:76] found id: "ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662"
	I0813 20:40:57.769783  186741 cri.go:76] found id: "de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637"
	I0813 20:40:57.769788  186741 cri.go:76] found id: "0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f"
	I0813 20:40:57.769793  186741 cri.go:76] found id: ""
	I0813 20:40:57.769835  186741 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:40:57.818501  186741 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","pid":1319,"status":"running","bundle":"/run/containers/storage/overlay-containers/0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f/userdata","rootfs":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","created":"2021-08-13T20:39:55.733812047Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.502928921Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"k
ube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/containers/kube-apiserver/244c51
68\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kuberne
tes.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","pid":2173,"status":"running","bundle":"/run/containers/storage/overlay-containers/0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d/userdata","rootfs":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","created":"2021-08-13T20:40:24.257677146Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b0cd6686","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.
cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b0cd6686\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.097525291Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc33
9c7f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\
"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/containers/kindnet-cni/5f00273b\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/volumes/kubernetes.io~projected/kube-api-access-wjm59\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","kubernetes.io/config.seen":"2021-
08-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","pid":2103,"status":"running","bundle":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata","rootfs":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","created":"2021-08-13T20:40:23.941830088Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.522016337Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.ku
bernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.848566608Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-pjb6w","io.kubernetes.cri-o.Labels":"{\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"7cdcb64568\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-db
e323a2b35f/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-pjb6w\",\"uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SeccompProfilePath":"runtime/de
fault","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/shm","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","pid":2167,"status":"running","bundle":"/run/containers/storage/overlay-containers/15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e/userdata","rootfs":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","created":"2021-08-13T20:40:24.19384482Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.
hash":"6ea07f15","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6ea07f15\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.079504068Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
"io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-dbe323a2b35f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7
fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/containers/kube-proxy/93f08aa8\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03
-4939-a057-dbe323a2b35f/volumes/kubernetes.io~projected/kube-api-access-8mjqv\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","pid":2690,"status":"running","bundle":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata","rootfs":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe46b1f0d00ec0757ccb1ee5d1b565323/merged","created":"2021-08-13T20:40:49.101759063Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":
"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.983557664Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth9a8d7a44\",\"mac\":\"2a:cb:06:90:ab:63\"},{\"name\":\"eth0\",\"mac\":\"d6:75:db:96:9a:5d\",\"sandbox\":\"/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:48.952217174Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.HostNetwork":"false",
"io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-ts9sl\",\"uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe4
6b1f0d00ec0757ccb1ee5d1b565323/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","k8s-app":"kube
-dns","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","pid":2100,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata","rootfs":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","created":"2021-08-13T20:40:23.95809172Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.523876061Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3a829ab2057cc070db7b625eea9bac1158d09da50
66242b708533877fd257658","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.852094836Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-k8wlb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kindnet\",\"controller-revision-hash\":\"694b6fb659\",\"app\":\"kindnet\",\"tier\":\"node\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pod
s/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-k8wlb\",\"uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.
kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/shm","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","pid":1206,"status":"running","bundle":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata","rootfs":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","created":"2021-08-13T20:39:55.433788257Z","annotations":
{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"4ebf0a68eff661e9c135374acf699695\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967492810Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.272991516Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cr
i-o.KubeName":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813203929-13784\",\"uid\":\"4ebf0a68eff661e9c135374acf699695\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.c
ri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source"
:"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","pid":1180,"status":"running","bundle":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata","rootfs":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","created":"2021-08-13T20:39:55.397924075Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967490591Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"436db4ab234524
af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.265342067Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o
.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813203929-13784\",\"uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kube
rnetes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","pid":1187,"status":"running","bundle":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979d
fb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata","rootfs":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","created":"2021-08-13T20:39:55.39391506Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967491922Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.269717758Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNe
twork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"component\":\"kube-controller-manager\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813203929-13784\",\"uid\":\"13241a9162471f4b325d1046e0460e76\",\"namespace\":\"kube-sy
stem\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/shm","io.kubernetes.pod.name":"
kube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","pid":1346,"status":"running","bundle":"/run/containers/storage/overlay-containers/765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803/userdata","rootfs":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","created":"2021-08-13T20:39:55.793791351Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"58d4e8b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container
.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"58d4e8b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.570582977Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\
",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false",
"io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/containers/etcd/d93341cb\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-0
8-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b","pid":3617,"status":"running","bundle":"/run/containers/storage/overlay-containers/8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b/userdata","rootfs":"/var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged","created":"2021-08-13T20:40:56.645788504Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f2d196de","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f2d196de\",\"io.kubernetes
.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:56.465829141Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_stora
ge-provisioner_5bba0aa8-5d05-4858-b5af-a2456279867c/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\
",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/containers/storage-provisioner/2f2769ad\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/volumes/kubernetes.io~projected/kube-api-access-qszs7\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5bba0aa8-5d05-4858-b5af-a2456279867c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-pr
ovisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:40:56.004913297Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","pid":3583,"status":"running","bundle":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata","rootfs":"/var/lib/containers/storage/overlay/581bfa6fb2fdccbb85ffc8650b99be95
83395c98d8baee0eb3569e7ac720cb36/merged","created":"2021-08-13T20:40:56.413738967Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:56.004913297Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:56.320190008Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\
"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5bba0aa8-5d05-4858-b5af-a2456279867c/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/581bfa6fb2fdccbb85ffc8650b99be9583395c98d8baee0eb3569e7ac720cb36/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappi
ngs":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5bba0aa8-5d05-4858-b5af-a2456279867c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\
"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:40:56.004913297Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","pid":1326,"status":"running","bundle":"/run/containers/storage/overlay-containers/de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637/userdata","rootfs":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","created":"2021-08-13T20:39:55.773726316Z","annotations":{"i
o.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.507405986Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri
-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd12
25437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/containers/kube-controller-manager/2eb45d77\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/
controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd
.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","pid":1190,"status":"running","bundle":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata","rootfs":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","created":"2021-08-13T20:39:55.407125139Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"4a6c9153825faff90e9c8767408e0ebc\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967469748Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"eae2bc9a9df7cb31a
6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.263344855Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-p
ause-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813203929-13784\",\"uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b
40e10399e0d3e89f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-08-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","pid":1337,"status":"running","bundle":"/run/containers/storage/overlay-containers/ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662/userdata","rootfs":"/var/lib/containers/storage/
overlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","created":"2021-08-13T20:39:55.793789894Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.519029007Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7d
bb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage
/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/containers/kube-scheduler/d30fe10b\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784
","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","pid":2722,"status":"running","bundle":"/run/containers/storage/overlay-containers/f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5/userdata","rootfs":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece564d0c97a04c0b83841935e/merged","created":"2021-08-13T20:40:49.313790208Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"287a3d56","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports"
:"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"287a3d56\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f5e9
60ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:49.16097913Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece5
64d0c97a04c0b83841935e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/etc-hosts\",\"readonly
\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/containers/coredns/88d76d02\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~projected/kube-api-access-mdnqp\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0813 20:40:57.819310  186741 cri.go:113] list returned 16 containers
	I0813 20:40:57.819334  186741 cri.go:116] container: {ID:0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f Status:running}
	I0813 20:40:57.819363  186741 cri.go:116] container: {ID:0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d Status:running}
	I0813 20:40:57.819367  186741 cri.go:116] container: {ID:125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 Status:running}
	I0813 20:40:57.819372  186741 cri.go:118] skipping 125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 - not in ps
	I0813 20:40:57.819380  186741 cri.go:116] container: {ID:15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e Status:running}
	I0813 20:40:57.819410  186741 cri.go:116] container: {ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 Status:running}
	I0813 20:40:57.819415  186741 cri.go:118] skipping 32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 - not in ps
	I0813 20:40:57.819419  186741 cri.go:116] container: {ID:3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 Status:running}
	I0813 20:40:57.819423  186741 cri.go:118] skipping 3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 - not in ps
	I0813 20:40:57.819431  186741 cri.go:116] container: {ID:3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b Status:running}
	I0813 20:40:57.819436  186741 cri.go:118] skipping 3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b - not in ps
	I0813 20:40:57.819442  186741 cri.go:116] container: {ID:436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff Status:running}
	I0813 20:40:57.819446  186741 cri.go:118] skipping 436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff - not in ps
	I0813 20:40:57.819453  186741 cri.go:116] container: {ID:4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 Status:running}
	I0813 20:40:57.819457  186741 cri.go:118] skipping 4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 - not in ps
	I0813 20:40:57.819463  186741 cri.go:116] container: {ID:765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803 Status:running}
	I0813 20:40:57.819467  186741 cri.go:116] container: {ID:8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b Status:running}
	I0813 20:40:57.819474  186741 cri.go:116] container: {ID:c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 Status:running}
	I0813 20:40:57.819478  186741 cri.go:118] skipping c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 - not in ps
	I0813 20:40:57.819485  186741 cri.go:116] container: {ID:de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637 Status:running}
	I0813 20:40:57.819493  186741 cri.go:116] container: {ID:eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f Status:running}
	I0813 20:40:57.819499  186741 cri.go:118] skipping eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f - not in ps
	I0813 20:40:57.819503  186741 cri.go:116] container: {ID:ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662 Status:running}
	I0813 20:40:57.819510  186741 cri.go:116] container: {ID:f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5 Status:running}
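The `skipping <id> - not in ps` lines show the runc listing being intersected with the IDs returned by the earlier `crictl ps --quiet` calls: `runc list` also reports the pod sandboxes (pause containers), which `crictl ps` does not, so those IDs are dropped before anything is paused. A rough reconstruction of that filter, with hypothetical names:

	// filterByPs keeps only IDs present in the crictl listing, mirroring
	// the "skipping <id> - not in ps" lines: sandbox IDs reported by
	// "runc list" but absent from "crictl ps" are excluded.
	func filterByPs(runcIDs, crictlIDs []string) []string {
		inPs := make(map[string]bool, len(crictlIDs))
		for _, id := range crictlIDs {
			inPs[id] = true
		}
		kept := make([]string, 0, len(runcIDs))
		for _, id := range runcIDs {
			if inPs[id] {
				kept = append(kept, id)
			}
		}
		return kept
	}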
	I0813 20:40:57.819550  186741 ssh_runner.go:149] Run: sudo runc pause 0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f
	I0813 20:40:57.834288  186741 ssh_runner.go:149] Run: sudo runc pause 0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f 0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d
	I0813 20:40:57.847228  186741 retry.go:31] will retry after 276.165072ms: runc: sudo runc pause 0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f 0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:40:57Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:40:58.123694  186741 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:40:58.132972  186741 pause.go:50] kubelet running: false
	I0813 20:40:58.133022  186741 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:40:58.252551  186741 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:40:58.252634  186741 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:40:58.328392  186741 cri.go:76] found id: "8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b"
	I0813 20:40:58.328419  186741 cri.go:76] found id: "f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5"
	I0813 20:40:58.328426  186741 cri.go:76] found id: "0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d"
	I0813 20:40:58.328431  186741 cri.go:76] found id: "15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e"
	I0813 20:40:58.328434  186741 cri.go:76] found id: "765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803"
	I0813 20:40:58.328440  186741 cri.go:76] found id: "ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662"
	I0813 20:40:58.328445  186741 cri.go:76] found id: "de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637"
	I0813 20:40:58.328451  186741 cri.go:76] found id: "0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f"
	I0813 20:40:58.328457  186741 cri.go:76] found id: ""
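The trailing `found id: ""` appears to be an artifact of splitting the newline-terminated output of the combined `crictl ps --quiet` pipeline: the final newline yields an empty element, which must be discarded before the IDs are compared against `runc list`. A small sketch of that parse step (hypothetical helper):

	// parseIDs splits newline-separated "crictl ps --quiet" output into
	// container IDs, dropping blanks such as the empty `found id: ""` above.
	// Requires "strings".
	func parseIDs(out string) []string {
		var ids []string
		for _, line := range strings.Split(out, "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids
	}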
	I0813 20:40:58.328502  186741 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:40:58.369916  186741 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","pid":1319,"status":"paused","bundle":"/run/containers/storage/overlay-containers/0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f/userdata","rootfs":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","created":"2021-08-13T20:39:55.733812047Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminat
ionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.502928921Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"ku
be-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/containers/kube-apiserver/244c516
8\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernet
es.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","pid":2173,"status":"running","bundle":"/run/containers/storage/overlay-containers/0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d/userdata","rootfs":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","created":"2021-08-13T20:40:24.257677146Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b0cd6686","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.c
ri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b0cd6686\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.097525291Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc339
c7f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"
/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/containers/kindnet-cni/5f00273b\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/volumes/kubernetes.io~projected/kube-api-access-wjm59\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","kubernetes.io/config.seen":"2021-0
8-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","pid":2103,"status":"running","bundle":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata","rootfs":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","created":"2021-08-13T20:40:23.941830088Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.522016337Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kub
ernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.848566608Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-pjb6w","io.kubernetes.cri-o.Labels":"{\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"7cdcb64568\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-dbe
323a2b35f/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-pjb6w\",\"uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SeccompProfilePath":"runtime/def
ault","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/shm","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","pid":2167,"status":"running","bundle":"/run/containers/storage/overlay-containers/15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e/userdata","rootfs":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","created":"2021-08-13T20:40:24.19384482Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.h
ash":"6ea07f15","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6ea07f15\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.079504068Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","
io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-dbe323a2b35f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7f
a-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/containers/kube-proxy/93f08aa8\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-
4939-a057-dbe323a2b35f/volumes/kubernetes.io~projected/kube-api-access-8mjqv\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","pid":2690,"status":"running","bundle":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata","rootfs":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe46b1f0d00ec0757ccb1ee5d1b565323/merged","created":"2021-08-13T20:40:49.101759063Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"
POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.983557664Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth9a8d7a44\",\"mac\":\"2a:cb:06:90:ab:63\"},{\"name\":\"eth0\",\"mac\":\"d6:75:db:96:9a:5d\",\"sandbox\":\"/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:48.952217174Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.HostNetwork":"false","
io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-ts9sl\",\"uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe46
b1f0d00ec0757ccb1ee5d1b565323/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","k8s-app":"kube-
dns","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","pid":2100,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata","rootfs":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","created":"2021-08-13T20:40:23.95809172Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.523876061Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3a829ab2057cc070db7b625eea9bac1158d09da506
6242b708533877fd257658","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.852094836Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-k8wlb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kindnet\",\"controller-revision-hash\":\"694b6fb659\",\"app\":\"kindnet\",\"tier\":\"node\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods
/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-k8wlb\",\"uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.k
ubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/shm","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","pid":1206,"status":"running","bundle":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata","rootfs":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","created":"2021-08-13T20:39:55.433788257Z","annotations":{
"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"4ebf0a68eff661e9c135374acf699695\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967492810Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.272991516Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri
-o.KubeName":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813203929-13784\",\"uid\":\"4ebf0a68eff661e9c135374acf699695\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cr
i-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source":
"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","pid":1180,"status":"running","bundle":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata","rootfs":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","created":"2021-08-13T20:39:55.397924075Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967490591Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"436db4ab234524a
f774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.265342067Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.
LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813203929-13784\",\"uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kuber
netes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","pid":1187,"status":"running","bundle":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979df
b0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata","rootfs":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","created":"2021-08-13T20:39:55.39391506Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967491922Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.269717758Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNet
work":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"component\":\"kube-controller-manager\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813203929-13784\",\"uid\":\"13241a9162471f4b325d1046e0460e76\",\"namespace\":\"kube-sys
tem\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/shm","io.kubernetes.pod.name":"k
ube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","pid":1346,"status":"running","bundle":"/run/containers/storage/overlay-containers/765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803/userdata","rootfs":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","created":"2021-08-13T20:39:55.793791351Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"58d4e8b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.
terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"58d4e8b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.570582977Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\"
,\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","
io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/containers/etcd/d93341cb\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-08
-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b","pid":3617,"status":"running","bundle":"/run/containers/storage/overlay-containers/8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b/userdata","rootfs":"/var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged","created":"2021-08-13T20:40:56.645788504Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f2d196de","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f2d196de\",\"io.kubernetes.
container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:56.465829141Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storag
e-provisioner_5bba0aa8-5d05-4858-b5af-a2456279867c/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\"
,\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/containers/storage-provisioner/2f2769ad\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/volumes/kubernetes.io~projected/kube-api-access-qszs7\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5bba0aa8-5d05-4858-b5af-a2456279867c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-pro
visioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:40:56.004913297Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","pid":3583,"status":"running","bundle":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata","rootfs":"/var/lib/containers/storage/overlay/581bfa6fb2fdccbb85ffc8650b99be958
3395c98d8baee0eb3569e7ac720cb36/merged","created":"2021-08-13T20:40:56.413738967Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:56.004913297Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:56.320190008Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"
addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5bba0aa8-5d05-4858-b5af-a2456279867c/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/581bfa6fb2fdccbb85ffc8650b99be9583395c98d8baee0eb3569e7ac720cb36/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappin
gs":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5bba0aa8-5d05-4858-b5af-a2456279867c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"
spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:40:56.004913297Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","pid":1326,"status":"running","bundle":"/run/containers/storage/overlay-containers/de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637/userdata","rootfs":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","created":"2021-08-13T20:39:55.773726316Z","annotations":{"io
.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.507405986Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-
o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd122
5437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/containers/kube-controller-manager/2eb45d77\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/c
ontroller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.
property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","pid":1190,"status":"running","bundle":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata","rootfs":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","created":"2021-08-13T20:39:55.407125139Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"4a6c9153825faff90e9c8767408e0ebc\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967469748Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"eae2bc9a9df7cb31a6
864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.263344855Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pa
use-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813203929-13784\",\"uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b4
0e10399e0d3e89f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-08-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","pid":1337,"status":"running","bundle":"/run/containers/storage/overlay-containers/ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662/userdata","rootfs":"/var/lib/containers/storage/o
verlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","created":"2021-08-13T20:39:55.793789894Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.519029007Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7db
b1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/
overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/containers/kube-scheduler/d30fe10b\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784"
,"io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","pid":2722,"status":"running","bundle":"/run/containers/storage/overlay-containers/f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5/userdata","rootfs":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece564d0c97a04c0b83841935e/merged","created":"2021-08-13T20:40:49.313790208Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"287a3d56","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":
"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"287a3d56\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f5e96
0ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:49.16097913Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece56
4d0c97a04c0b83841935e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/etc-hosts\",\"readonly\
":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/containers/coredns/88d76d02\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~projected/kube-api-access-mdnqp\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0813 20:40:58.370619  186741 cri.go:113] list returned 16 containers
	I0813 20:40:58.370636  186741 cri.go:116] container: {ID:0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f Status:paused}
	I0813 20:40:58.370646  186741 cri.go:122] skipping {0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f paused}: state = "paused", want "running"
	I0813 20:40:58.370656  186741 cri.go:116] container: {ID:0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d Status:running}
	I0813 20:40:58.370660  186741 cri.go:116] container: {ID:125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 Status:running}
	I0813 20:40:58.370664  186741 cri.go:118] skipping 125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 - not in ps
	I0813 20:40:58.370668  186741 cri.go:116] container: {ID:15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e Status:running}
	I0813 20:40:58.370676  186741 cri.go:116] container: {ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 Status:running}
	I0813 20:40:58.370680  186741 cri.go:118] skipping 32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 - not in ps
	I0813 20:40:58.370684  186741 cri.go:116] container: {ID:3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 Status:running}
	I0813 20:40:58.370688  186741 cri.go:118] skipping 3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 - not in ps
	I0813 20:40:58.370692  186741 cri.go:116] container: {ID:3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b Status:running}
	I0813 20:40:58.370696  186741 cri.go:118] skipping 3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b - not in ps
	I0813 20:40:58.370699  186741 cri.go:116] container: {ID:436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff Status:running}
	I0813 20:40:58.370707  186741 cri.go:118] skipping 436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff - not in ps
	I0813 20:40:58.370713  186741 cri.go:116] container: {ID:4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 Status:running}
	I0813 20:40:58.370718  186741 cri.go:118] skipping 4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 - not in ps
	I0813 20:40:58.370725  186741 cri.go:116] container: {ID:765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803 Status:running}
	I0813 20:40:58.370731  186741 cri.go:116] container: {ID:8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b Status:running}
	I0813 20:40:58.370735  186741 cri.go:116] container: {ID:c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 Status:running}
	I0813 20:40:58.370739  186741 cri.go:118] skipping c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 - not in ps
	I0813 20:40:58.370743  186741 cri.go:116] container: {ID:de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637 Status:running}
	I0813 20:40:58.370747  186741 cri.go:116] container: {ID:eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f Status:running}
	I0813 20:40:58.370751  186741 cri.go:118] skipping eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f - not in ps
	I0813 20:40:58.370754  186741 cri.go:116] container: {ID:ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662 Status:running}
	I0813 20:40:58.370758  186741 cri.go:116] container: {ID:f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5 Status:running}
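The enumeration above comes from parsing the `runc list -f json` dump and keeping only containers whose state is "running" and whose ID also appeared in the earlier crictl listing (the "not in ps" skips). A minimal sketch of that parse-and-filter step, assuming invented names (runcContainer, filterPausable) rather than minikube's actual cri.go:

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer holds the two fields of `runc list -f json` output that the
// log above actually uses: the container ID and its state.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterPausable mirrors the decisions logged above: keep only containers in
// state "running" whose ID also showed up in the earlier `crictl ps` listing.
func filterPausable(raw []byte, inPS map[string]bool) ([]string, error) {
	var all []runcContainer
	if err := json.Unmarshal(raw, &all); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range all {
		if c.Status != "running" {
			continue // e.g. the already-paused kube-apiserver container above
		}
		if !inPS[c.ID] {
			continue // sandbox (pause) containers appear in runc list but not in crictl ps
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	raw := []byte(`[{"id":"ctr-a","status":"paused"},{"id":"ctr-b","status":"running"}]`)
	fmt.Println(filterPausable(raw, map[string]bool{"ctr-b": true}))
}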
	I0813 20:40:58.370793  186741 ssh_runner.go:149] Run: sudo runc pause 0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d
	I0813 20:40:58.386586  186741 ssh_runner.go:149] Run: sudo runc pause 0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d 15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e
	I0813 20:40:58.399938  186741 retry.go:31] will retry after 540.190908ms: runc: sudo runc pause 0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d 15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:40:58Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:40:58.940653  186741 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:40:58.950230  186741 pause.go:50] kubelet running: false
	I0813 20:40:58.950309  186741 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
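The "kubelet running: false" line above maps the exit status of `systemctl is-active --quiet` to a boolean: exit 0 means the unit is active, anything else means it is not. A small sketch of that mapping, mirroring the logged arguments minus sudo; kubeletRunning is an invented name, not minikube's pause.go:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning returns true only when `systemctl is-active --quiet`
// exits 0, i.e. when the unit is active.
func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet running:", kubeletRunning())
}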
	I0813 20:40:59.070258  186741 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:40:59.070357  186741 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:40:59.142129  186741 cri.go:76] found id: "8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b"
	I0813 20:40:59.142159  186741 cri.go:76] found id: "f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5"
	I0813 20:40:59.142166  186741 cri.go:76] found id: "0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d"
	I0813 20:40:59.142172  186741 cri.go:76] found id: "15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e"
	I0813 20:40:59.142177  186741 cri.go:76] found id: "765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803"
	I0813 20:40:59.142188  186741 cri.go:76] found id: "ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662"
	I0813 20:40:59.142194  186741 cri.go:76] found id: "de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637"
	I0813 20:40:59.142198  186741 cri.go:76] found id: "0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f"
	I0813 20:40:59.142203  186741 cri.go:76] found id: ""
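The IDs above are collected by running `crictl ps -a --quiet` once per namespace label and splitting the output on newlines; the final empty found id comes from the trailing newline. A sketch of that collection step, with listCRIContainers as an invented stand-in for minikube's cri.go (and with the empty trailing ID dropped rather than reported):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers runs `crictl ps -a --quiet` once per namespace label
// and collects the printed container IDs, skipping blank lines.
func listCRIContainers(namespaces []string) ([]string, error) {
	var ids []string
	for _, ns := range namespaces {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+ns).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps (%s): %v", ns, err)
		}
		for _, id := range strings.Split(string(out), "\n") {
			if id = strings.TrimSpace(id); id != "" { // skip the empty trailing ID
				ids = append(ids, id)
			}
		}
	}
	return ids, nil
}

func main() {
	ids, err := listCRIContainers([]string{"kube-system", "kubernetes-dashboard"})
	fmt.Println(ids, err)
}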
	I0813 20:40:59.142247  186741 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:40:59.193479  186741 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","pid":1319,"status":"paused","bundle":"/run/containers/storage/overlay-containers/0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f/userdata","rootfs":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","created":"2021-08-13T20:39:55.733812047Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminat
ionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.502928921Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"ku
be-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/containers/kube-apiserver/244c516
8\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernet
es.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","pid":2173,"status":"paused","bundle":"/run/containers/storage/overlay-containers/0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d/userdata","rootfs":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","created":"2021-08-13T20:40:24.257677146Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b0cd6686","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cr
i-o.Annotations":"{\"io.kubernetes.container.hash\":\"b0cd6686\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.097525291Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc339c
7f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/
run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/containers/kindnet-cni/5f00273b\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/volumes/kubernetes.io~projected/kube-api-access-wjm59\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","kubernetes.io/config.seen":"2021-08
-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","pid":2103,"status":"running","bundle":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata","rootfs":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","created":"2021-08-13T20:40:23.941830088Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.522016337Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kube
rnetes.cri-o.ContainerName":"k8s_POD_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.848566608Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-pjb6w","io.kubernetes.cri-o.Labels":"{\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"7cdcb64568\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-dbe3
23a2b35f/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-pjb6w\",\"uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SeccompProfilePath":"runtime/defa
ult","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/shm","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","pid":2167,"status":"running","bundle":"/run/containers/storage/overlay-containers/15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e/userdata","rootfs":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","created":"2021-08-13T20:40:24.19384482Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.ha
sh":"6ea07f15","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6ea07f15\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.079504068Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","i
o.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-dbe323a2b35f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7fa
-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/containers/kube-proxy/93f08aa8\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4
939-a057-dbe323a2b35f/volumes/kubernetes.io~projected/kube-api-access-8mjqv\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","pid":2690,"status":"running","bundle":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata","rootfs":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe46b1f0d00ec0757ccb1ee5d1b565323/merged","created":"2021-08-13T20:40:49.101759063Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"P
OD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.983557664Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth9a8d7a44\",\"mac\":\"2a:cb:06:90:ab:63\"},{\"name\":\"eth0\",\"mac\":\"d6:75:db:96:9a:5d\",\"sandbox\":\"/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:48.952217174Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.HostNetwork":"false","i
o.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-ts9sl\",\"uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe46b
1f0d00ec0757ccb1ee5d1b565323/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","k8s-app":"kube-d
ns","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","pid":2100,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata","rootfs":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","created":"2021-08-13T20:40:23.95809172Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.523876061Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3a829ab2057cc070db7b625eea9bac1158d09da5066
242b708533877fd257658","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.852094836Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-k8wlb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kindnet\",\"controller-revision-hash\":\"694b6fb659\",\"app\":\"kindnet\",\"tier\":\"node\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/
kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-k8wlb\",\"uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.ku
bernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/shm","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","pid":1206,"status":"running","bundle":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata","rootfs":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","created":"2021-08-13T20:39:55.433788257Z","annotations":{"
component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"4ebf0a68eff661e9c135374acf699695\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967492810Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.272991516Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-
o.KubeName":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813203929-13784\",\"uid\":\"4ebf0a68eff661e9c135374acf699695\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri
-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source":"
file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","pid":1180,"status":"running","bundle":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata","rootfs":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","created":"2021-08-13T20:39:55.397924075Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967490591Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"436db4ab234524af
774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.265342067Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.L
ogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813203929-13784\",\"uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubern
etes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","pid":1187,"status":"running","bundle":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb
0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata","rootfs":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","created":"2021-08-13T20:39:55.39391506Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967491922Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.269717758Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetw
ork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"component\":\"kube-controller-manager\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813203929-13784\",\"uid\":\"13241a9162471f4b325d1046e0460e76\",\"namespace\":\"kube-syst
em\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/shm","io.kubernetes.pod.name":"ku
be-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","pid":1346,"status":"running","bundle":"/run/containers/storage/overlay-containers/765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803/userdata","rootfs":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","created":"2021-08-13T20:39:55.793791351Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"58d4e8b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.t
erminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"58d4e8b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.570582977Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\",
\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","i
o.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/containers/etcd/d93341cb\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-08-
13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b","pid":3617,"status":"running","bundle":"/run/containers/storage/overlay-containers/8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b/userdata","rootfs":"/var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged","created":"2021-08-13T20:40:56.645788504Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f2d196de","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f2d196de\",\"io.kubernetes.c
ontainer.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:56.465829141Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage
-provisioner_5bba0aa8-5d05-4858-b5af-a2456279867c/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",
\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/containers/storage-provisioner/2f2769ad\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/volumes/kubernetes.io~projected/kube-api-access-qszs7\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5bba0aa8-5d05-4858-b5af-a2456279867c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-prov
isioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:40:56.004913297Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","pid":3583,"status":"running","bundle":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata","rootfs":"/var/lib/containers/storage/overlay/581bfa6fb2fdccbb85ffc8650b99be9583
395c98d8baee0eb3569e7ac720cb36/merged","created":"2021-08-13T20:40:56.413738967Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:56.004913297Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:56.320190008Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"a
ddonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5bba0aa8-5d05-4858-b5af-a2456279867c/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/581bfa6fb2fdccbb85ffc8650b99be9583395c98d8baee0eb3569e7ac720cb36/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMapping
s":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5bba0aa8-5d05-4858-b5af-a2456279867c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"s
pec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:40:56.004913297Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","pid":1326,"status":"running","bundle":"/run/containers/storage/overlay-containers/de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637/userdata","rootfs":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","created":"2021-08-13T20:39:55.773726316Z","annotations":{"io.
container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.507405986Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o
.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225
437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/containers/kube-controller-manager/2eb45d77\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/co
ntroller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.p
roperty.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","pid":1190,"status":"running","bundle":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata","rootfs":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","created":"2021-08-13T20:39:55.407125139Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"4a6c9153825faff90e9c8767408e0ebc\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967469748Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"eae2bc9a9df7cb31a68
64918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.263344855Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pau
se-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813203929-13784\",\"uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40
e10399e0d3e89f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-08-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","pid":1337,"status":"running","bundle":"/run/containers/storage/overlay-containers/ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662/userdata","rootfs":"/var/lib/containers/storage/ov
erlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","created":"2021-08-13T20:39:55.793789894Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.519029007Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb
1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/o
verlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/containers/kube-scheduler/d30fe10b\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784",
"io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","pid":2722,"status":"running","bundle":"/run/containers/storage/overlay-containers/f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5/userdata","rootfs":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece564d0c97a04c0b83841935e/merged","created":"2021-08-13T20:40:49.313790208Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"287a3d56","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"287a3d56\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f5e960
ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:49.16097913Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece564
d0c97a04c0b83841935e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/etc-hosts\",\"readonly\"
:false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/containers/coredns/88d76d02\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~projected/kube-api-access-mdnqp\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0813 20:40:59.194241  186741 cri.go:113] list returned 16 containers
	I0813 20:40:59.194258  186741 cri.go:116] container: {ID:0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f Status:paused}
	I0813 20:40:59.194268  186741 cri.go:122] skipping {0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f paused}: state = "paused", want "running"
	I0813 20:40:59.194280  186741 cri.go:116] container: {ID:0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d Status:paused}
	I0813 20:40:59.194294  186741 cri.go:122] skipping {0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d paused}: state = "paused", want "running"
	I0813 20:40:59.194300  186741 cri.go:116] container: {ID:125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 Status:running}
	I0813 20:40:59.194310  186741 cri.go:118] skipping 125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 - not in ps
	I0813 20:40:59.194318  186741 cri.go:116] container: {ID:15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e Status:running}
	I0813 20:40:59.194322  186741 cri.go:116] container: {ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 Status:running}
	I0813 20:40:59.194328  186741 cri.go:118] skipping 32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 - not in ps
	I0813 20:40:59.194332  186741 cri.go:116] container: {ID:3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 Status:running}
	I0813 20:40:59.194336  186741 cri.go:118] skipping 3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 - not in ps
	I0813 20:40:59.194343  186741 cri.go:116] container: {ID:3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b Status:running}
	I0813 20:40:59.194347  186741 cri.go:118] skipping 3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b - not in ps
	I0813 20:40:59.194353  186741 cri.go:116] container: {ID:436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff Status:running}
	I0813 20:40:59.194359  186741 cri.go:118] skipping 436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff - not in ps
	I0813 20:40:59.194367  186741 cri.go:116] container: {ID:4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 Status:running}
	I0813 20:40:59.194374  186741 cri.go:118] skipping 4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 - not in ps
	I0813 20:40:59.194383  186741 cri.go:116] container: {ID:765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803 Status:running}
	I0813 20:40:59.194390  186741 cri.go:116] container: {ID:8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b Status:running}
	I0813 20:40:59.194400  186741 cri.go:116] container: {ID:c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 Status:running}
	I0813 20:40:59.194409  186741 cri.go:118] skipping c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 - not in ps
	I0813 20:40:59.194417  186741 cri.go:116] container: {ID:de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637 Status:running}
	I0813 20:40:59.194427  186741 cri.go:116] container: {ID:eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f Status:running}
	I0813 20:40:59.194436  186741 cri.go:118] skipping eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f - not in ps
	I0813 20:40:59.194440  186741 cri.go:116] container: {ID:ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662 Status:running}
	I0813 20:40:59.194447  186741 cri.go:116] container: {ID:f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5 Status:running}
	I0813 20:40:59.194490  186741 ssh_runner.go:149] Run: sudo runc pause 15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e
	I0813 20:40:59.210198  186741 ssh_runner.go:149] Run: sudo runc pause 15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e 765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803
	I0813 20:40:59.227475  186741 out.go:177] 
	W0813 20:40:59.227629  186741 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e 765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:40:59Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:40:59.227648  186741 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:40:59.230467  186741 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:40:59.231819  186741 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210813203929-13784 --alsologtostderr -v=5" : exit status 80
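Note on the failure above: the log shows minikube pausing the first container successfully (ssh_runner.go ran "sudo runc pause 15fb32d8..." on its own), then invoking "sudo runc pause" a second time with two container IDs in one command. As the quoted usage text states, runc pause accepts exactly one container ID per invocation, so the second call exits with status 1 and the test fails with GUEST_PAUSE. Below is a minimal sketch of the one-ID-per-call pattern; pauseContainers is a hypothetical helper written for illustration only, not minikube's actual code, which drives runc over SSH through its ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

// pauseContainers pauses each running container individually, since
// `runc pause <container-id>` takes exactly one container ID per call.
// Hypothetical sketch: real minikube runs these commands over SSH.
func pauseContainers(ids []string) error {
	for _, id := range ids {
		// Equivalent to: sudo runc pause <id>
		cmd := exec.Command("sudo", "runc", "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// Container IDs taken from the log above.
	ids := []string{
		"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e",
		"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803",
	}
	if err := pauseContainers(ids); err != nil {
		fmt.Println(err)
	}
}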
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-13784
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860",
	        "Created": "2021-08-13T20:39:31.372712772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:31.872578968Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hosts",
	        "LogPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860-json.log",
	        "Name": "/pause-20210813203929-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/merged",
	                "UpperDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/diff",
	                "WorkDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-13784",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a821792d507c6dabf086e5652e018123e85e4b030464132aafdef8bc15a9d200",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a821792d507c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ce53ded591b3"
	                    ],
	                    "NetworkID": "a8af35fe90fb5b850638bd77da889b067a8390ebee6680d76e896390e70a0e9e",
	                    "EndpointID": "0b310d5a393fb3e0184bcf23f10e5a3746cbeb23b4b202e9e5c6f681f15cdcfa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784: exit status 2 (325.519ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25: exit status 110 (15.429281257s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:34:32 UTC | Fri, 13 Aug 2021 20:36:24 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false               |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0              |                                           |         |         |                               |                               |
	| ssh     | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:24 UTC | Fri, 13 Aug 2021 20:36:29 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	|         | -- sudo crictl pull busybox               |                                           |         |         |                               |                               |
	| start   | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:29 UTC | Fri, 13 Aug 2021 20:37:03 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker          |                                           |         |         |                               |                               |
	|         |  --container-runtime=crio                 |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3              |                                           |         |         |                               |                               |
	| ssh     | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:03 UTC | Fri, 13 Aug 2021 20:37:03 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	|         | -- sudo crictl image ls                   |                                           |         |         |                               |                               |
	| -p      | test-preload-20210813203431-13784         | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:04 UTC | Fri, 13 Aug 2021 20:37:05 UTC |
	|         | logs -n 25                                |                                           |         |         |                               |                               |
	| delete  | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:05 UTC | Fri, 13 Aug 2021 20:37:10 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	| start   | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:10 UTC | Fri, 13 Aug 2021 20:37:47 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --memory=2048 --driver=docker             |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:48 UTC | Fri, 13 Aug 2021 20:37:48 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --cancel-scheduled                        |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:00 UTC | Fri, 13 Aug 2021 20:38:26 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --schedule 5s                             |                                           |         |         |                               |                               |
	| delete  | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:28 UTC | Fri, 13 Aug 2021 20:38:33 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	| delete  | -p                                        | insufficient-storage-20210813203833-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:40 UTC | Fri, 13 Aug 2021 20:38:46 UTC |
	|         | insufficient-storage-20210813203833-13784 |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	|         | --memory=2048 --force-systemd             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:49 UTC | Fri, 13 Aug 2021 20:39:31 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=5 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:31 UTC | Fri, 13 Aug 2021 20:39:35 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	| start   | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:40:06 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                    |                                           |         |         |                               |                               |
	|         | --memory=2048 --wait=true                 |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:06 UTC | Fri, 13 Aug 2021 20:40:09 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	| delete  | -p                                        | kubenet-20210813204009-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:09 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | kubenet-20210813204009-13784              |                                           |         |         |                               |                               |
	| delete  | -p                                        | flannel-20210813204010-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:10 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | flannel-20210813204010-13784              |                                           |         |         |                               |                               |
	| delete  | -p false-20210813204010-13784             | false-20210813204010-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:11 UTC | Fri, 13 Aug 2021 20:40:11 UTC |
	| start   | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:35 UTC | Fri, 13 Aug 2021 20:40:23 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                 |                                           |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15             |                                           |         |         |                               |                               |
	|         | --apiserver-names=localhost               |                                           |         |         |                               |                               |
	|         | --apiserver-names=www.google.com          |                                           |         |         |                               |                               |
	|         | --apiserver-port=8555                     |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| -p      | cert-options-20210813203935-13784         | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:23 UTC | Fri, 13 Aug 2021 20:40:24 UTC |
	|         | ssh openssl x509 -text -noout -in         |                                           |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:24 UTC | Fri, 13 Aug 2021 20:40:27 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --install-addons=false                    |                                           |         |         |                               |                               |
	|         | --wait=all --driver=docker                |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:57 UTC |
	|         | --alsologtostderr                         |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:40:51
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:40:51.747634  185014 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:40:51.747710  185014 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:51.747719  185014 out.go:311] Setting ErrFile to fd 2...
	I0813 20:40:51.747723  185014 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:51.747819  185014 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:40:51.748031  185014 out.go:305] Setting JSON to false
	I0813 20:40:51.788591  185014 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5014,"bootTime":1628882237,"procs":268,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:40:51.788716  185014 start.go:121] virtualization: kvm guest
	I0813 20:40:51.791422  185014 out.go:177] * [pause-20210813203929-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:40:51.792887  185014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:40:51.791580  185014 notify.go:169] Checking for updates...
	I0813 20:40:51.794185  185014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:40:51.795644  185014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:40:51.797157  185014 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:40:51.797629  185014 config.go:177] Loaded profile config "pause-20210813203929-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:40:51.798015  185014 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:40:51.854714  185014 docker.go:132] docker version: linux-19.03.15
	I0813 20:40:51.854853  185014 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:40:51.944914  185014 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:65 SystemTime:2021-08-13 20:40:51.899475025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:40:51.945011  185014 docker.go:244] overlay module found
	I0813 20:40:51.946868  185014 out.go:177] * Using the docker driver based on existing profile
	I0813 20:40:51.946889  185014 start.go:278] selected driver: docker
	I0813 20:40:51.946895  185014 start.go:751] validating driver "docker" against &{Name:pause-20210813203929-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813203929-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:40:51.946988  185014 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:40:51.947384  185014 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:40:52.030384  185014 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:65 SystemTime:2021-08-13 20:40:51.986497662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:40:52.030955  185014 cni.go:93] Creating CNI manager for ""
	I0813 20:40:52.030971  185014 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:40:52.030981  185014 start_flags.go:277] config:
	{Name:pause-20210813203929-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813203929-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:40:52.033088  185014 out.go:177] * Starting control plane node pause-20210813203929-13784 in cluster pause-20210813203929-13784
	I0813 20:40:52.033128  185014 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:40:52.034521  185014 out.go:177] * Pulling base image ...
	I0813 20:40:52.034552  185014 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:40:52.034588  185014 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:40:52.034611  185014 cache.go:56] Caching tarball of preloaded images
	I0813 20:40:52.034623  185014 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:40:52.034788  185014 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:40:52.034803  185014 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:40:52.034929  185014 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/config.json ...
	I0813 20:40:52.131642  185014 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:40:52.131681  185014 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:40:52.131700  185014 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:40:52.131746  185014 start.go:313] acquiring machines lock for pause-20210813203929-13784: {Name:mkc3c6def57bdb8498093d9c7837d750e75fcf22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:40:52.131846  185014 start.go:317] acquired machines lock for "pause-20210813203929-13784" in 74.952µs
	I0813 20:40:52.131874  185014 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:40:52.131881  185014 fix.go:55] fixHost starting: 
	I0813 20:40:52.132194  185014 cli_runner.go:115] Run: docker container inspect pause-20210813203929-13784 --format={{.State.Status}}
	I0813 20:40:52.173705  185014 fix.go:108] recreateIfNeeded on pause-20210813203929-13784: state=Running err=<nil>
	W0813 20:40:52.173771  185014 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:40:51.843392  180006 out.go:204]   - Configuring RBAC rules ...
	I0813 20:40:52.256386  180006 cni.go:93] Creating CNI manager for ""
	I0813 20:40:52.256415  180006 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:40:52.258141  180006 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:40:52.258220  180006 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:40:52.262009  180006 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0813 20:40:52.262027  180006 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:40:52.275578  180006 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:40:47.772517  184062 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}
	W0813 20:40:47.813473  184062 cli_runner.go:162] docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}} returned with exit code 1
	I0813 20:40:47.813584  184062 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813203846-13784": docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813203846-13784
	I0813 20:40:47.813601  184062 oci.go:646] temporary error: container missing-upgrade-20210813203846-13784 status is  but expect it to be exited
	I0813 20:40:47.813637  184062 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813203846-13784": docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813203846-13784
	I0813 20:40:49.397383  184062 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}
	W0813 20:40:49.440120  184062 cli_runner.go:162] docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}} returned with exit code 1
	I0813 20:40:49.440203  184062 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813203846-13784": docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813203846-13784
	I0813 20:40:49.440231  184062 oci.go:646] temporary error: container missing-upgrade-20210813203846-13784 status is  but expect it to be exited
	I0813 20:40:49.440264  184062 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813203846-13784": docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813203846-13784
	I0813 20:40:51.781615  184062 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}
	W0813 20:40:51.822497  184062 cli_runner.go:162] docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}} returned with exit code 1
	I0813 20:40:51.822573  184062 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813203846-13784": docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813203846-13784
	I0813 20:40:51.822603  184062 oci.go:646] temporary error: container missing-upgrade-20210813203846-13784 status is  but expect it to be exited
	I0813 20:40:51.822629  184062 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813203846-13784": docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813203846-13784
	I0813 20:40:52.598255  180006 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:40:52.598344  180006 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:40:52.598345  180006 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=kubernetes-upgrade-20210813204027-13784 minikube.k8s.io/updated_at=2021_08_13T20_40_52_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:40:52.615971  180006 ops.go:34] apiserver oom_adj: 16
	I0813 20:40:52.615990  180006 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 20:40:52.616004  180006 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:40:52.708239  180006 kubeadm.go:985] duration metric: took 109.952952ms to wait for elevateKubeSystemPrivileges.
	I0813 20:40:52.708298  180006 kubeadm.go:392] StartCluster complete in 15.706418119s
	I0813 20:40:52.708332  180006 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:40:52.708428  180006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:40:52.709879  180006 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:40:52.710869  180006 kapi.go:59] client config for kubernetes-upgrade-20210813204027-13784: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:40:53.228230  180006 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20210813204027-13784" rescaled to 1
	I0813 20:40:53.228282  180006 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 20:40:53.230311  180006 out.go:177] * Verifying Kubernetes components...
	I0813 20:40:53.228342  180006 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:40:53.228370  180006 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:40:53.230527  180006 addons.go:59] Setting storage-provisioner=true in profile "kubernetes-upgrade-20210813204027-13784"
	I0813 20:40:53.230550  180006 addons.go:135] Setting addon storage-provisioner=true in "kubernetes-upgrade-20210813204027-13784"
	I0813 20:40:53.228558  180006 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204027-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:40:53.230371  180006 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:40:53.230573  180006 addons.go:59] Setting default-storageclass=true in profile "kubernetes-upgrade-20210813204027-13784"
	I0813 20:40:53.230607  180006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20210813204027-13784"
	W0813 20:40:53.230578  180006 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:40:53.230767  180006 host.go:66] Checking if "kubernetes-upgrade-20210813204027-13784" exists ...
	I0813 20:40:53.230995  180006 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:40:53.231370  180006 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:40:53.288555  180006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:40:53.288301  180006 kapi.go:59] client config for kubernetes-upgrade-20210813204027-13784: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:40:53.288686  180006 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:40:53.288701  180006 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:40:53.288757  180006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:40:53.293143  180006 addons.go:135] Setting addon default-storageclass=true in "kubernetes-upgrade-20210813204027-13784"
	W0813 20:40:53.293173  180006 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:40:53.293206  180006 host.go:66] Checking if "kubernetes-upgrade-20210813204027-13784" exists ...
	I0813 20:40:53.293894  180006 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:40:53.321243  180006 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:40:53.322344  180006 kapi.go:59] client config for kubernetes-upgrade-20210813204027-13784: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:40:53.323960  180006 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:40:53.323999  180006 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:40:53.351641  180006 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:40:53.351674  180006 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:40:53.351747  180006 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:40:53.364051  180006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:40:53.407199  180006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:40:53.521137  180006 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:40:53.571297  180006 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
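The pipeline run at 20:40:53.321243 splices a hosts stanza into the CoreDNS Corefile just ahead of its forward plugin, which is what makes host.minikube.internal resolve to the host gateway from inside the cluster. A minimal Go sketch of the same insertion, operating on a plain Corefile string (injectHostRecord is a hypothetical name; the log performs this with kubectl and sed):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts stanza immediately before the
// "forward . /etc/resolv.conf" line, mirroring the sed pipeline in the log.
func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(stanza) // static record goes in ahead of the forwarder
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.58.1"))
}

The fallthrough directive inside the hosts block lets every name other than host.minikube.internal continue on to the forward plugin.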
	I0813 20:40:53.571376  180006 api_server.go:70] duration metric: took 343.065937ms to wait for apiserver process to appear ...
	I0813 20:40:53.571398  180006 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:40:53.571410  180006 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0813 20:40:53.573286  180006 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:40:53.577134  180006 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0813 20:40:53.578130  180006 api_server.go:139] control plane version: v1.14.0
	I0813 20:40:53.578152  180006 api_server.go:129] duration metric: took 6.747375ms to wait for apiserver health ...
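api_server.go gates the rest of start-up on /healthz returning 200, as the lines above show. A minimal polling sketch of that check; certificate verification is skipped here for brevity, whereas minikube dials with the cluster CA from the rest.Config dump earlier:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200
// or the deadline expires. Hypothetical helper; real code verifies the CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.58.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}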
	I0813 20:40:53.578164  180006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:40:53.587369  180006 system_pods.go:59] 0 kube-system pods found
	I0813 20:40:53.587431  180006 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
	I0813 20:40:53.859918  180006 system_pods.go:59] 1 kube-system pods found
	I0813 20:40:53.859955  180006 system_pods.go:61] "storage-provisioner" [c0ef813b-fc76-11eb-abf8-02421f46f1af] Pending
	I0813 20:40:53.859971  180006 retry.go:31] will retry after 381.329545ms: only 1 pod(s) have shown up
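The two retry.go lines above show a jittered, growing delay between pod listings. A small self-contained sketch of that pattern, with illustrative names and a stubbed check in place of a real kube-system pod list:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs check until it succeeds, sleeping a jittered,
// roughly geometric interval between attempts, similar to the
// "will retry after ..." messages in the log. Names are illustrative.
func retryWithBackoff(check func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	pods := 0
	_ = retryWithBackoff(func() error {
		pods++ // stand-in for listing kube-system pods
		if pods < 3 {
			return fmt.Errorf("only %d pod(s) have shown up", pods)
		}
		return nil
	}, 5, 200*time.Millisecond)
}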
	I0813 20:40:52.175814  185014 out.go:177] * Updating the running docker "pause-20210813203929-13784" container ...
	I0813 20:40:52.175851  185014 machine.go:88] provisioning docker machine ...
	I0813 20:40:52.175883  185014 ubuntu.go:169] provisioning hostname "pause-20210813203929-13784"
	I0813 20:40:52.175946  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:52.215679  185014 main.go:130] libmachine: Using SSH client type: native
	I0813 20:40:52.215877  185014 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32890 <nil> <nil>}
	I0813 20:40:52.215903  185014 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210813203929-13784 && echo "pause-20210813203929-13784" | sudo tee /etc/hostname
	I0813 20:40:52.358026  185014 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210813203929-13784
	
	I0813 20:40:52.358110  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:52.401255  185014 main.go:130] libmachine: Using SSH client type: native
	I0813 20:40:52.401441  185014 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32890 <nil> <nil>}
	I0813 20:40:52.401474  185014 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210813203929-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210813203929-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210813203929-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:40:52.533175  185014 main.go:130] libmachine: SSH cmd err, output: <nil>: 
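The SSH script above edits /etc/hosts idempotently: it only touches the file when the hostname is absent, preferring to rewrite an existing 127.0.1.1 entry over appending a new one. The same logic in pure Go, operating on the file contents as a string (ensureHostname is a hypothetical helper, shown for illustration):

package main

import (
	"fmt"
	"regexp"
)

// ensureHostname mirrors the shell above: skip if any line already ends in
// the hostname, otherwise rewrite the 127.0.1.1 entry or append one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "pause-20210813203929-13784"))
}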
	I0813 20:40:52.533206  185014 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:40:52.533234  185014 ubuntu.go:177] setting up certificates
	I0813 20:40:52.533246  185014 provision.go:83] configureAuth start
	I0813 20:40:52.533294  185014 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210813203929-13784
	I0813 20:40:52.576977  185014 provision.go:138] copyHostCerts
	I0813 20:40:52.577055  185014 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:40:52.577072  185014 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:40:52.578355  185014 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:40:52.578483  185014 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:40:52.578501  185014 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:40:52.578532  185014 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:40:52.578609  185014 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:40:52.578623  185014 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:40:52.578650  185014 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:40:52.578717  185014 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.pause-20210813203929-13784 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210813203929-13784]
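provision.go regenerates the machine's server certificate with the SAN list shown above (two IPs plus three DNS names). A compact sketch of producing such a certificate with Go's crypto/x509; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Certificate template carrying the SANs from the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-20210813203929-13784"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "pause-20210813203929-13784"},
	}
	// Self-signed in this sketch; provision.go signs with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}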
	I0813 20:40:52.675608  185014 provision.go:172] copyRemoteCerts
	I0813 20:40:52.675675  185014 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:40:52.675727  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:52.723181  185014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:40:52.817602  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:40:52.835254  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:40:52.850992  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:40:52.868775  185014 provision.go:86] duration metric: configureAuth took 335.515543ms
	I0813 20:40:52.868801  185014 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:40:52.868976  185014 config.go:177] Loaded profile config "pause-20210813203929-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:40:52.869117  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:52.918068  185014 main.go:130] libmachine: Using SSH client type: native
	I0813 20:40:52.918282  185014 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32890 <nil> <nil>}
	I0813 20:40:52.918309  185014 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:40:53.563086  185014 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:40:53.563122  185014 machine.go:91] provisioned docker machine in 1.387262575s
	I0813 20:40:53.563136  185014 start.go:267] post-start starting for "pause-20210813203929-13784" (driver="docker")
	I0813 20:40:53.563144  185014 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:40:53.563213  185014 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:40:53.563264  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:53.614430  185014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:40:53.706274  185014 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:40:53.709201  185014 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:40:53.709228  185014 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:40:53.709241  185014 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:40:53.709249  185014 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:40:53.709262  185014 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:40:53.709318  185014 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:40:53.709449  185014 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:40:53.709628  185014 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:40:53.717030  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:40:53.736369  185014 start.go:270] post-start completed in 173.214769ms
	I0813 20:40:53.736449  185014 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:40:53.736500  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:53.787683  185014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:40:53.877854  185014 fix.go:57] fixHost completed within 1.745966885s
	I0813 20:40:53.877875  185014 start.go:80] releasing machines lock for "pause-20210813203929-13784", held for 1.746015255s
	I0813 20:40:53.877957  185014 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210813203929-13784
	I0813 20:40:53.918305  185014 ssh_runner.go:149] Run: systemctl --version
	I0813 20:40:53.918362  185014 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:40:53.918366  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:53.918418  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:53.964392  185014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:40:53.964883  185014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:40:54.201633  185014 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:40:54.212394  185014 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:40:54.221637  185014 docker.go:153] disabling docker service ...
	I0813 20:40:54.221686  185014 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:40:54.230268  185014 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:40:54.239227  185014 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:40:54.359526  185014 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:40:54.492240  185014 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:40:54.501957  185014 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:40:54.514641  185014 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:40:54.522105  185014 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:40:54.522128  185014 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
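The two sed invocations above pin CRI-O's pause image and point its default CNI network at "kindnet" in /etc/crio/crio.conf. An equivalent in-memory rewrite in Go, for illustration only (rewriteCrioConf is a hypothetical name):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the two sed edits in the log: pin the pause image
// and select the "kindnet" CNI network, whether or not the key is commented.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "k8s.gcr.io/pause:3.4.1"`)
	return regexp.MustCompile(`(?m)^.*cni_default_network = .*$`).
		ReplaceAllString(conf, `cni_default_network = "kindnet"`)
}

func main() {
	fmt.Print(rewriteCrioConf("pause_image = \"k8s.gcr.io/pause:3.2\"\n# cni_default_network = \"\"\n"))
}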
	I0813 20:40:54.529509  185014 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:40:54.535420  185014 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:40:54.535465  185014 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:40:54.542679  185014 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
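The sequence above is a probe-then-fallback: the sysctl read fails with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding is switched on. A sketch of that sequence, assuming it runs as root directly on the node rather than via sudo over SSH as in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter probes the bridge sysctl; if the key is absent it
// loads br_netfilter, then enables IPv4 forwarding, matching the log order.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// /proc/sys/net/bridge only appears once the module is loaded
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}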
	I0813 20:40:54.549114  185014 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:40:54.667370  185014 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:40:54.678354  185014 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:40:54.678437  185014 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:40:54.681698  185014 start.go:413] Will wait 60s for crictl version
	I0813 20:40:54.681755  185014 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:40:54.708890  185014 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:40:54.708969  185014 ssh_runner.go:149] Run: crio --version
	I0813 20:40:54.772071  185014 ssh_runner.go:149] Run: crio --version
	I0813 20:40:54.840211  185014 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:40:54.840311  185014 cli_runner.go:115] Run: docker network inspect pause-20210813203929-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:40:54.882009  185014 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 20:40:54.885736  185014 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:40:54.885799  185014 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:40:54.916124  185014 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:40:54.916146  185014 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:40:54.916194  185014 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:40:54.940656  185014 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:40:54.940682  185014 cache_images.go:74] Images are preloaded, skipping loading
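The "all images are preloaded" decision comes from comparing the output of `crictl images --output json` against the image list required for the requested Kubernetes version. A trimmed sketch of that comparison, assuming the JSON shape of the CRI image list (the struct keeps only the repoTags field this check needs):

package main

import (
	"encoding/json"
	"fmt"
)

// imageList matches just enough of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloaded reports whether every required image tag is already present.
func preloaded(raw []byte, required []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["k8s.gcr.io/kube-proxy:v1.21.3"]}]}`)
	ok, _ := preloaded(raw, []string{"k8s.gcr.io/kube-proxy:v1.21.3"})
	fmt.Println(ok)
}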
	I0813 20:40:54.940760  185014 ssh_runner.go:149] Run: crio config
	I0813 20:40:55.011701  185014 cni.go:93] Creating CNI manager for ""
	I0813 20:40:55.011728  185014 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:40:55.011743  185014 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:40:55.011760  185014 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813203929-13784 NodeName:pause-20210813203929-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:40:55.011926  185014 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210813203929-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:40:55.012040  185014 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-20210813203929-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813203929-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:40:55.012099  185014 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:40:55.019737  185014 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:40:55.019815  185014 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:40:55.026405  185014 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (557 bytes)
	I0813 20:40:55.038411  185014 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:40:55.050792  185014 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0813 20:40:55.062389  185014 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:40:55.065346  185014 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784 for IP: 192.168.49.2
	I0813 20:40:55.065399  185014 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:40:55.065423  185014 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:40:55.065543  185014 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/client.key
	I0813 20:40:55.065579  185014 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/apiserver.key.dd3b5fb2
	I0813 20:40:55.065604  185014 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/proxy-client.key
	I0813 20:40:55.065724  185014 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:40:55.065776  185014 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:40:55.065792  185014 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:40:55.065818  185014 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:40:55.065855  185014 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:40:55.065895  185014 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:40:55.065963  185014 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:40:55.067253  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:40:55.084656  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:40:55.100910  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:40:55.116429  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:40:55.133391  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:40:55.151885  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:40:55.167924  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:40:55.183515  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:40:55.199394  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:40:55.215699  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:40:55.231655  185014 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:40:55.247715  185014 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:40:55.260096  185014 ssh_runner.go:149] Run: openssl version
	I0813 20:40:55.265366  185014 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:40:55.273406  185014 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:40:55.276397  185014 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:40:55.276447  185014 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:40:55.281094  185014 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:40:55.287365  185014 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:40:55.294166  185014 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:40:55.297055  185014 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:40:55.297110  185014 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:40:55.301998  185014 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:40:55.308203  185014 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:40:55.315689  185014 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:40:55.318989  185014 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:40:55.319029  185014 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:40:55.324232  185014 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
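Each test -L / ln -fs pair above links a certificate into /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix (for example b5213941.0 for minikubeCA.pem), which is the layout OpenSSL-based clients use to look up trust anchors. A sketch of the same flow, shelling out to openssl as the log does (linkCert is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates certsDir/<subject-hash>.0 -> certPath, matching the
// "openssl x509 -hash" plus "ln -fs" pair in the log. Sketch only.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}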
	I0813 20:40:55.333807  185014 kubeadm.go:390] StartCluster: {Name:pause-20210813203929-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813203929-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:40:55.333911  185014 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:40:55.333958  185014 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:40:55.356990  185014 cri.go:76] found id: "f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5"
	I0813 20:40:55.357022  185014 cri.go:76] found id: "0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d"
	I0813 20:40:55.357026  185014 cri.go:76] found id: "15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e"
	I0813 20:40:55.357032  185014 cri.go:76] found id: "765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803"
	I0813 20:40:55.357035  185014 cri.go:76] found id: "ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662"
	I0813 20:40:55.357039  185014 cri.go:76] found id: "de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637"
	I0813 20:40:55.357043  185014 cri.go:76] found id: "0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f"
	I0813 20:40:55.357047  185014 cri.go:76] found id: ""
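The IDs above come from `crictl ps -a --quiet` filtered by the kube-system namespace label; --quiet prints one container ID per line, and the empty trailing entry is most likely just the terminating newline after splitting. A sketch of issuing and parsing that call locally (the real call runs through sudo over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers asks crictl for the quiet (IDs-only) list of
// containers labelled with the kube-system namespace, one ID per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" { // drop the empty trailing entry
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(ids, err)
}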
	I0813 20:40:55.357092  185014 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:40:55.392939  185014 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","pid":1319,"status":"running","bundle":"/run/containers/storage/overlay-containers/0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f/userdata","rootfs":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","created":"2021-08-13T20:39:55.733812047Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.502928921Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/containers/kube-apiserver/244c5168\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","pid":2173,"status":"running","bundle":"/run/containers/storage/overlay-containers/0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d/userdata","rootfs":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","created":"2021-08-13T20:40:24.257677146Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b0cd6686","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b0cd6686\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.097525291Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/containers/kindnet-cni/5f00273b\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/volumes/kubernetes.io~projected/kube-api-access-wjm59\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","kubernetes.io/config.seen":"2021-08-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","pid":2103,"status":"running","bundle":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata","rootfs":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","created":"2021-08-13T20:40:23.941830088Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.522016337Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.848566608Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-pjb6w","io.kubernetes.cri-o.Labels":"{\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"7cdcb64568\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-dbe323a2b35f/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-pjb6w\",\"uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/shm","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","pid":2167,"status":"running","bundle":"/run/containers/storage/overlay-containers/15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e/userdata","rootfs":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","created":"2021-08-13T20:40:24.19384482Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6ea07f15","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6ea07f15\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.079504068Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-dbe323a2b35f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/containers/kube-proxy/93f08aa8\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/volumes/kubernetes.io~projected/kube-api-access-8mjqv\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","pid":2690,"status":"running","bundle":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata","rootfs":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe46b1f0d00ec0757ccb1ee5d1b565323/merged","created":"2021-08-13T20:40:49.101759063Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.983557664Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth9a8d7a44\",\"mac\":\"2a:cb:06:90:ab:63\"},{\"name\":\"eth0\",\"mac\":\"d6:75:db:96:9a:5d\",\"sandbox\":\"/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:48.952217174Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-ts9sl\",\"uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe46b1f0d00ec0757ccb1ee5d1b565323/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","pid":2100,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata","rootfs":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","created":"2021-08-13T20:40:23.95809172Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.523876061Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3a829ab2057cc070db7b625eea9bac1158d09da50
66242b708533877fd257658","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.852094836Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-k8wlb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kindnet\",\"controller-revision-hash\":\"694b6fb659\",\"app\":\"kindnet\",\"tier\":\"node\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pod
s/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-k8wlb\",\"uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.
kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/shm","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","pid":1206,"status":"running","bundle":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata","rootfs":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","created":"2021-08-13T20:39:55.433788257Z","annotations":
{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"4ebf0a68eff661e9c135374acf699695\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967492810Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.272991516Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cr
i-o.KubeName":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813203929-13784\",\"uid\":\"4ebf0a68eff661e9c135374acf699695\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.c
ri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source"
:"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","pid":1180,"status":"running","bundle":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata","rootfs":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","created":"2021-08-13T20:39:55.397924075Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967490591Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"436db4ab234524
af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.265342067Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o
.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813203929-13784\",\"uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kube
rnetes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","pid":1187,"status":"running","bundle":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979d
fb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata","rootfs":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","created":"2021-08-13T20:39:55.39391506Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967491922Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.269717758Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNe
twork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"component\":\"kube-controller-manager\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813203929-13784\",\"uid\":\"13241a9162471f4b325d1046e0460e76\",\"namespace\":\"kube-sy
stem\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/shm","io.kubernetes.pod.name":"
kube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","pid":1346,"status":"running","bundle":"/run/containers/storage/overlay-containers/765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803/userdata","rootfs":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","created":"2021-08-13T20:39:55.793791351Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"58d4e8b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container
.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"58d4e8b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.570582977Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\
",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false",
"io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/containers/etcd/d93341cb\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-0
8-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","pid":1326,"status":"running","bundle":"/run/containers/storage/overlay-containers/de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637/userdata","rootfs":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","created":"2021-08-13T20:39:55.773726316Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubern
etes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.507405986Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.LogPat
h":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOn
ce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/containers/kube-controller-manager/2eb45d77\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_pat
h\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","pid":1190,"status":"running","bundle":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata","rootfs":"/var/lib
/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","created":"2021-08-13T20:39:55.407125139Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"4a6c9153825faff90e9c8767408e0ebc\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967469748Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.263344855Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubern
etes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813203929-13784\",\"uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d3447
20b2dad516f506ec5f5dac7b/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","k
ubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-08-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","pid":1337,"status":"running","bundle":"/run/containers/storage/overlay-containers/ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662/userdata","rootfs":"/var/lib/containers/storage/overlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","created":"2021-08-13T20:39:55.793789894Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.conta
iner.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.519029007Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kub
e-system\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.
cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/containers/kube-scheduler/d30fe10b\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.
TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","pid":2722,"status":"running","bundle":"/run/containers/storage/overlay-containers/f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5/userdata","rootfs":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece564d0c97a04c0b83841935e/merged","created":"2021-08-13T20:40:49.313790208Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"287a3d56","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.An
notations":"{\"io.kubernetes.container.hash\":\"287a3d56\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:49.16097913Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-
o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece564d0c97a04c0b83841935e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a
935ef01476eaf11","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/containers/coredns/88d76d02\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~projected/kube-api-access-mdnqp\",\"readonly\":
true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0813 20:40:55.393612  185014 cri.go:113] list returned 14 containers
	I0813 20:40:55.393629  185014 cri.go:116] container: {ID:0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f Status:running}
	I0813 20:40:55.393640  185014 cri.go:122] skipping {0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f running}: state = "running", want "paused"
	I0813 20:40:55.393658  185014 cri.go:116] container: {ID:0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d Status:running}
	I0813 20:40:55.393665  185014 cri.go:122] skipping {0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d running}: state = "running", want "paused"
	I0813 20:40:55.393670  185014 cri.go:116] container: {ID:125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 Status:running}
	I0813 20:40:55.393677  185014 cri.go:118] skipping 125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 - not in ps
	I0813 20:40:55.393682  185014 cri.go:116] container: {ID:15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e Status:running}
	I0813 20:40:55.393688  185014 cri.go:122] skipping {15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e running}: state = "running", want "paused"
	I0813 20:40:55.393693  185014 cri.go:116] container: {ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 Status:running}
	I0813 20:40:55.393701  185014 cri.go:118] skipping 32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 - not in ps
	I0813 20:40:55.393708  185014 cri.go:116] container: {ID:3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 Status:running}
	I0813 20:40:55.393713  185014 cri.go:118] skipping 3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 - not in ps
	I0813 20:40:55.393717  185014 cri.go:116] container: {ID:3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b Status:running}
	I0813 20:40:55.393723  185014 cri.go:118] skipping 3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b - not in ps
	I0813 20:40:55.393726  185014 cri.go:116] container: {ID:436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff Status:running}
	I0813 20:40:55.393731  185014 cri.go:118] skipping 436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff - not in ps
	I0813 20:40:55.393736  185014 cri.go:116] container: {ID:4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 Status:running}
	I0813 20:40:55.393741  185014 cri.go:118] skipping 4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 - not in ps
	I0813 20:40:55.393744  185014 cri.go:116] container: {ID:765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803 Status:running}
	I0813 20:40:55.393748  185014 cri.go:122] skipping {765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803 running}: state = "running", want "paused"
	I0813 20:40:55.393755  185014 cri.go:116] container: {ID:de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637 Status:running}
	I0813 20:40:55.393760  185014 cri.go:122] skipping {de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637 running}: state = "running", want "paused"
	I0813 20:40:55.393766  185014 cri.go:116] container: {ID:eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f Status:running}
	I0813 20:40:55.393770  185014 cri.go:118] skipping eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f - not in ps
	I0813 20:40:55.393777  185014 cri.go:116] container: {ID:ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662 Status:running}
	I0813 20:40:55.393782  185014 cri.go:122] skipping {ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662 running}: state = "running", want "paused"
	I0813 20:40:55.393788  185014 cri.go:116] container: {ID:f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5 Status:running}
	I0813 20:40:55.393792  185014 cri.go:122] skipping {f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5 running}: state = "running", want "paused"
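
All 14 containers are skipped here, so there is nothing to pause. The decision logic reads as a simple two-rule filter; a rough sketch under that assumption, not the actual cri.go implementation:

package crifilter

// container mirrors the {ID Status} pairs printed in the log above.
type container struct{ ID, Status string }

// filterCandidates drops containers that the earlier ps-style listing did
// not return ("not in ps") and containers whose state differs from the
// wanted one (`state = "running", want "paused"`), keeping the rest.
func filterCandidates(all []container, inPs map[string]bool, want string) []container {
	var keep []container
	for _, c := range all {
		if !inPs[c.ID] {
			continue // "skipping <id> - not in ps"
		}
		if c.Status != want {
			continue // `skipping {<id> <state>}: state = "running", want "paused"`
		}
		keep = append(keep, c)
	}
	return keep
}

Applied to the list above with want "paused", such a filter returns an empty slice, which matches the log: every container is running, and the sandbox IDs are not in the ps output.
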
	I0813 20:40:55.393835  185014 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:40:55.400642  185014 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:40:55.400663  185014 kubeadm.go:600] restartCluster start
	I0813 20:40:55.400704  185014 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:40:55.406745  185014 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:40:55.407598  185014 kubeconfig.go:93] found "pause-20210813203929-13784" server: "https://192.168.49.2:8443"
	I0813 20:40:55.408340  185014 kapi.go:59] client config for pause-20210813203929-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/client
.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:40:55.410001  185014 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:40:55.416377  185014 api_server.go:164] Checking apiserver status ...
	I0813 20:40:55.416431  185014 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:40:55.434528  185014 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1319/cgroup
	I0813 20:40:55.441654  185014 api_server.go:180] apiserver freezer: "3:freezer:/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/system.slice/crio-0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f.scope"
	I0813 20:40:55.441722  185014 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/system.slice/crio-0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f.scope/freezer.state
	I0813 20:40:55.447745  185014 api_server.go:202] freezer state: "THAWED"
	I0813 20:40:55.447773  185014 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:40:55.452523  185014 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
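
The four steps above (pgrep for the apiserver, grep its freezer cgroup, read freezer.state, then hit /healthz) verify that the apiserver is neither missing nor frozen. A sketch of the cgroup part, assuming it runs directly on a cgroup-v1 node; the real flow issues the same commands over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same command as in the log: find the newest exact-match apiserver pid.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))

	cgroups, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(cgroups), "\n") {
		// Mirrors the `egrep ^[0-9]+:freezer:` step in the log.
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				panic(err)
			}
			// "THAWED" means the scope is not frozen/paused.
			fmt.Printf("freezer state: %q\n", strings.TrimSpace(string(state)))
			return
		}
	}
	fmt.Println("no freezer cgroup found for pid", pid)
}
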
	I0813 20:40:55.466422  185014 system_pods.go:86] 7 kube-system pods found
	I0813 20:40:55.466454  185014 system_pods.go:89] "coredns-558bd4d5db-ts9sl" [da06b52c-7664-4a7e-98ae-ea1e61dc5560] Running
	I0813 20:40:55.466461  185014 system_pods.go:89] "etcd-pause-20210813203929-13784" [29fe51d9-c84b-422d-89a7-4cf9888c82be] Running
	I0813 20:40:55.466469  185014 system_pods.go:89] "kindnet-k8wlb" [199ebbdb-e768-4153-98da-db0adc339c7f] Running
	I0813 20:40:55.466479  185014 system_pods.go:89] "kube-apiserver-pause-20210813203929-13784" [43777af7-aff4-4c26-96d5-c69b294359b7] Running
	I0813 20:40:55.466485  185014 system_pods.go:89] "kube-controller-manager-pause-20210813203929-13784" [56ebd739-5411-4dde-9dd9-60f9732ce98b] Running
	I0813 20:40:55.466493  185014 system_pods.go:89] "kube-proxy-pjb6w" [5b9ca7fa-6b03-4939-a057-dbe323a2b35f] Running
	I0813 20:40:55.466498  185014 system_pods.go:89] "kube-scheduler-pause-20210813203929-13784" [b34dcf70-a330-4910-8163-dc3735492c35] Running
	I0813 20:40:55.467412  185014 api_server.go:139] control plane version: v1.21.3
	I0813 20:40:55.467434  185014 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2
	I0813 20:40:55.467446  185014 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:40:55.467452  185014 kubeadm.go:604] restartCluster took 66.784164ms
	I0813 20:40:55.467459  185014 kubeadm.go:392] StartCluster complete in 133.660732ms
	I0813 20:40:55.467475  185014 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:40:55.467569  185014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:40:55.469112  185014 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:40:55.470367  185014 kapi.go:59] client config for pause-20210813203929-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/client
.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:40:55.474111  185014 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813203929-13784" rescaled to 1
	I0813 20:40:55.474164  185014 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:40:55.474179  185014 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
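
Rescaling the coredns deployment to one replica keeps a single-node cluster from carrying a redundant DNS pod. One plausible way to express that step with client-go's scale subresource; this is an assumption about the mechanism, not necessarily minikube's exact call:

package corednsscale

import (
	"context"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS sets the coredns deployment to a single replica via the
// scale subresource, giving the "rescaled to 1" effect logged above.
func rescaleCoreDNS(cs *kubernetes.Clientset) error {
	scale := &autoscalingv1.Scale{
		ObjectMeta: metav1.ObjectMeta{Name: "coredns", Namespace: "kube-system"},
		Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
	}
	_, err := cs.AppsV1().Deployments("kube-system").
		UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
	return err
}
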
	I0813 20:40:55.476464  185014 out.go:177] * Verifying Kubernetes components...
	I0813 20:40:55.474249  185014 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:40:55.476584  185014 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813203929-13784"
	I0813 20:40:55.476604  185014 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813203929-13784"
	W0813 20:40:55.476612  185014 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:40:55.474339  185014 config.go:177] Loaded profile config "pause-20210813203929-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:40:55.476520  185014 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:40:55.476647  185014 host.go:66] Checking if "pause-20210813203929-13784" exists ...
	I0813 20:40:55.476653  185014 addons.go:59] Setting default-storageclass=true in profile "pause-20210813203929-13784"
	I0813 20:40:55.476675  185014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813203929-13784"
	I0813 20:40:55.477013  185014 cli_runner.go:115] Run: docker container inspect pause-20210813203929-13784 --format={{.State.Status}}
	I0813 20:40:55.477157  185014 cli_runner.go:115] Run: docker container inspect pause-20210813203929-13784 --format={{.State.Status}}
	I0813 20:40:55.528985  185014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:40:55.529111  185014 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:40:55.529125  185014 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:40:55.529181  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:55.530871  185014 kapi.go:59] client config for pause-20210813203929-13784: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813203929-13784/client
.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:40:55.535589  185014 addons.go:135] Setting addon default-storageclass=true in "pause-20210813203929-13784"
	W0813 20:40:55.535614  185014 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:40:55.535645  185014 host.go:66] Checking if "pause-20210813203929-13784" exists ...
	I0813 20:40:55.536193  185014 cli_runner.go:115] Run: docker container inspect pause-20210813203929-13784 --format={{.State.Status}}
	I0813 20:40:55.555121  185014 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:40:55.555112  185014 node_ready.go:35] waiting up to 6m0s for node "pause-20210813203929-13784" to be "Ready" ...
	I0813 20:40:55.558851  185014 node_ready.go:49] node "pause-20210813203929-13784" has status "Ready":"True"
	I0813 20:40:55.558876  185014 node_ready.go:38] duration metric: took 3.722082ms waiting for node "pause-20210813203929-13784" to be "Ready" ...
	I0813 20:40:55.558888  185014 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0813 20:40:55.563204  185014 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-ts9sl" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.577381  185014 pod_ready.go:92] pod "coredns-558bd4d5db-ts9sl" in "kube-system" namespace has status "Ready":"True"
	I0813 20:40:55.577411  185014 pod_ready.go:81] duration metric: took 14.183962ms waiting for pod "coredns-558bd4d5db-ts9sl" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.577424  185014 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813203929-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.579477  185014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:40:55.582002  185014 pod_ready.go:92] pod "etcd-pause-20210813203929-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:40:55.582023  185014 pod_ready.go:81] duration metric: took 4.59087ms waiting for pod "etcd-pause-20210813203929-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.582039  185014 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813203929-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.586431  185014 pod_ready.go:92] pod "kube-apiserver-pause-20210813203929-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:40:55.586445  185014 pod_ready.go:81] duration metric: took 4.397366ms waiting for pod "kube-apiserver-pause-20210813203929-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.586453  185014 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813203929-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.591407  185014 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:40:55.591428  185014 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:40:55.591481  185014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:40:55.634281  185014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:40:55.655202  185014 pod_ready.go:92] pod "kube-controller-manager-pause-20210813203929-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:40:55.655221  185014 pod_ready.go:81] duration metric: took 68.761235ms waiting for pod "kube-controller-manager-pause-20210813203929-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.655232  185014 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pjb6w" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:55.679585  185014 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:40:55.732519  185014 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:40:56.009794  185014 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:40:56.009820  185014 addons.go:344] enableAddons completed in 535.576128ms
	I0813 20:40:56.055692  185014 pod_ready.go:92] pod "kube-proxy-pjb6w" in "kube-system" namespace has status "Ready":"True"
	I0813 20:40:56.055720  185014 pod_ready.go:81] duration metric: took 400.481122ms waiting for pod "kube-proxy-pjb6w" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:56.055734  185014 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813203929-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:56.456496  185014 pod_ready.go:92] pod "kube-scheduler-pause-20210813203929-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:40:56.456519  185014 pod_ready.go:81] duration metric: took 400.774125ms waiting for pod "kube-scheduler-pause-20210813203929-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:40:56.456529  185014 pod_ready.go:38] duration metric: took 897.628096ms for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:40:56.456550  185014 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:40:56.456596  185014 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:40:56.478510  185014 api_server.go:70] duration metric: took 1.00431635s to wait for apiserver process to appear ...
	I0813 20:40:56.478541  185014 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:40:56.478553  185014 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:40:56.483445  185014 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:40:56.484402  185014 api_server.go:139] control plane version: v1.21.3
	I0813 20:40:56.484421  185014 api_server.go:129] duration metric: took 5.874322ms to wait for apiserver health ...
	I0813 20:40:56.484431  185014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:40:56.658144  185014 system_pods.go:59] 8 kube-system pods found
	I0813 20:40:56.658174  185014 system_pods.go:61] "coredns-558bd4d5db-ts9sl" [da06b52c-7664-4a7e-98ae-ea1e61dc5560] Running
	I0813 20:40:56.658179  185014 system_pods.go:61] "etcd-pause-20210813203929-13784" [29fe51d9-c84b-422d-89a7-4cf9888c82be] Running
	I0813 20:40:56.658183  185014 system_pods.go:61] "kindnet-k8wlb" [199ebbdb-e768-4153-98da-db0adc339c7f] Running
	I0813 20:40:56.658187  185014 system_pods.go:61] "kube-apiserver-pause-20210813203929-13784" [43777af7-aff4-4c26-96d5-c69b294359b7] Running
	I0813 20:40:56.658191  185014 system_pods.go:61] "kube-controller-manager-pause-20210813203929-13784" [56ebd739-5411-4dde-9dd9-60f9732ce98b] Running
	I0813 20:40:56.658195  185014 system_pods.go:61] "kube-proxy-pjb6w" [5b9ca7fa-6b03-4939-a057-dbe323a2b35f] Running
	I0813 20:40:56.658200  185014 system_pods.go:61] "kube-scheduler-pause-20210813203929-13784" [b34dcf70-a330-4910-8163-dc3735492c35] Running
	I0813 20:40:56.658212  185014 system_pods.go:61] "storage-provisioner" [5bba0aa8-5d05-4858-b5af-a2456279867c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:40:56.658224  185014 system_pods.go:74] duration metric: took 173.786683ms to wait for pod list to return data ...
	I0813 20:40:56.658237  185014 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:40:56.855984  185014 default_sa.go:45] found service account: "default"
	I0813 20:40:56.856008  185014 default_sa.go:55] duration metric: took 197.761151ms for default service account to be created ...
	I0813 20:40:56.856017  185014 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:40:57.059219  185014 system_pods.go:86] 8 kube-system pods found
	I0813 20:40:57.059251  185014 system_pods.go:89] "coredns-558bd4d5db-ts9sl" [da06b52c-7664-4a7e-98ae-ea1e61dc5560] Running
	I0813 20:40:57.059257  185014 system_pods.go:89] "etcd-pause-20210813203929-13784" [29fe51d9-c84b-422d-89a7-4cf9888c82be] Running
	I0813 20:40:57.059262  185014 system_pods.go:89] "kindnet-k8wlb" [199ebbdb-e768-4153-98da-db0adc339c7f] Running
	I0813 20:40:57.059267  185014 system_pods.go:89] "kube-apiserver-pause-20210813203929-13784" [43777af7-aff4-4c26-96d5-c69b294359b7] Running
	I0813 20:40:57.059274  185014 system_pods.go:89] "kube-controller-manager-pause-20210813203929-13784" [56ebd739-5411-4dde-9dd9-60f9732ce98b] Running
	I0813 20:40:57.059279  185014 system_pods.go:89] "kube-proxy-pjb6w" [5b9ca7fa-6b03-4939-a057-dbe323a2b35f] Running
	I0813 20:40:57.059286  185014 system_pods.go:89] "kube-scheduler-pause-20210813203929-13784" [b34dcf70-a330-4910-8163-dc3735492c35] Running
	I0813 20:40:57.059297  185014 system_pods.go:89] "storage-provisioner" [5bba0aa8-5d05-4858-b5af-a2456279867c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:40:57.059311  185014 system_pods.go:126] duration metric: took 203.288902ms to wait for k8s-apps to be running ...
	I0813 20:40:57.059321  185014 system_svc.go:44] waiting for kubelet service to be running ...
	I0813 20:40:57.059372  185014 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:40:57.069131  185014 system_svc.go:56] duration metric: took 9.80404ms (WaitForService) to wait for kubelet.
	I0813 20:40:57.069158  185014 kubeadm.go:547] duration metric: took 1.594969629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:40:57.069185  185014 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:40:57.259466  185014 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:40:57.259500  185014 node_conditions.go:123] node cpu capacity is 8
	I0813 20:40:57.259515  185014 node_conditions.go:105] duration metric: took 190.324677ms to run NodePressure ...
	I0813 20:40:57.259528  185014 start.go:231] waiting for startup goroutines ...
	I0813 20:40:57.315919  185014 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:40:57.317934  185014 out.go:177] * Done! kubectl is now configured to use "pause-20210813203929-13784" cluster and "default" namespace by default
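
The node_ready/pod_ready waits above boil down to polling the API server until a Ready condition turns True. A minimal client-go sketch of that loop, assuming a standard kubeconfig on disk; the helper name waitPodReady is illustrative, not minikube's own code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its PodReady condition is True or the
    // timeout expires, mirroring the pod_ready.go waits in the log above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil // pod has status "Ready":"True"
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitPodReady(cs, "kube-system", "coredns-558bd4d5db-ts9sl", 6*time.Minute))
    }
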
	I0813 20:40:53.866427  180006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:40:53.866452  180006 addons.go:344] enableAddons completed in 638.093758ms
	I0813 20:40:54.244419  180006 system_pods.go:59] 1 kube-system pods found
	I0813 20:40:54.244444  180006 system_pods.go:61] "storage-provisioner" [c0ef813b-fc76-11eb-abf8-02421f46f1af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:40:54.244456  180006 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
	I0813 20:40:54.670369  180006 system_pods.go:59] 1 kube-system pods found
	I0813 20:40:54.670403  180006 system_pods.go:61] "storage-provisioner" [c0ef813b-fc76-11eb-abf8-02421f46f1af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:40:54.670420  180006 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
	I0813 20:40:55.146515  180006 system_pods.go:59] 1 kube-system pods found
	I0813 20:40:55.146546  180006 system_pods.go:61] "storage-provisioner" [c0ef813b-fc76-11eb-abf8-02421f46f1af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:40:55.146560  180006 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
	I0813 20:40:55.736906  180006 system_pods.go:59] 1 kube-system pods found
	I0813 20:40:55.736943  180006 system_pods.go:61] "storage-provisioner" [c0ef813b-fc76-11eb-abf8-02421f46f1af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:40:55.736961  180006 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
	I0813 20:40:56.574177  180006 system_pods.go:59] 1 kube-system pods found
	I0813 20:40:56.574208  180006 system_pods.go:61] "storage-provisioner" [c0ef813b-fc76-11eb-abf8-02421f46f1af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:40:56.574222  180006 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
	I0813 20:40:57.323652  180006 system_pods.go:59] 1 kube-system pods found
	I0813 20:40:57.323684  180006 system_pods.go:61] "storage-provisioner" [c0ef813b-fc76-11eb-abf8-02421f46f1af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:40:57.323703  180006 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
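
The retry.go lines above show the 180006 process backing off with growing, jittered waits while the storage-provisioner pod stays Unschedulable (the node is still tainted). A self-contained sketch of that style of retry, assuming nothing about minikube's internals; retryWithBackoff is a hypothetical helper:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn with a growing, jittered wait between
    // attempts, in the spirit of the "will retry after ..." lines above.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Grow the wait with each attempt and add jitter so
    		// concurrent pollers do not synchronize.
    		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	count := 0
    	err := retryWithBackoff(5, 400*time.Millisecond, func() error {
    		count++
    		if count < 4 {
    			return errors.New("only 1 pod(s) have shown up")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }
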
	I0813 20:40:56.329635  184062 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}
	W0813 20:40:56.370692  184062 cli_runner.go:162] docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}} returned with exit code 1
	I0813 20:40:56.370760  184062 oci.go:644] temporary error verifying shutdown: unknown state "missing-upgrade-20210813203846-13784": docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813203846-13784
	I0813 20:40:56.370775  184062 oci.go:646] temporary error: container missing-upgrade-20210813203846-13784 status is "" but expected it to be exited
	I0813 20:40:56.370803  184062 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-20210813203846-13784": docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20210813203846-13784
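
The 184062 process above is verifying that a container has shut down by running docker container inspect and distinguishing exit status 1 with "No such container" (already gone) from a genuine failure. A minimal sketch of that check via os/exec, assuming docker is on PATH; containerState is an illustrative name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState returns the container's state (e.g. "exited"), or
    // "absent" when docker reports the container does not exist at all.
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").CombinedOutput()
    	if err != nil {
    		if strings.Contains(string(out), "No such container") {
    			return "absent", nil
    		}
    		return "", fmt.Errorf("inspect %s: %v: %s", name, err, out)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("missing-upgrade-20210813203846-13784")
    	fmt.Println(state, err)
    }
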
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:00 UTC. --
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.135173785Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.136754653Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139480430Z" level=info msg="Conmon does support the --sync option"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139554604Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139563583Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.144516509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.147006953Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.149207086Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160317327Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160348934Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558550091Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-ts9sl Namespace:kube-system ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 NetNS:/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558791773Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 13 20:40:53 pause-20210813203929-13784 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.306861254Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.450483242Z" level=info msg="Ran pod sandbox c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 with infra container: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.451659183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452327272Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452980775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.453580166Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.454314289Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466027097Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/passwd: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466066274Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/group: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.662676174Z" level=info msg="Created container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.663296728Z" level=info msg="Starting container: 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.673744233Z" level=info msg="Started container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	8422317486aff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       0                   c9be4b40ae287
	f5e960ccbf41e       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   10 seconds ago       Running             coredns                   0                   32623516945f8
	0e66c2b5613f5       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   35 seconds ago       Running             kindnet-cni               0                   3a829ab2057cc
	15fb32d86d158       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   35 seconds ago       Running             kube-proxy                0                   125d82aa8b508
	765e30beb45ae       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   eae2bc9a9df7c
	ecf109c279e47       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   3c556f4397e88
	de897ce9eab3c       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   4483c604f9ed1
	0a93b9e0c15af       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   436db4ab23452
	
	* 
	* ==> coredns [f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.138700] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.390684] cgroup: cgroup2: unknown option "nsdelegate"
	[  +2.362662] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[  +6.305228] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.592043] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.311286] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a d3 6c 1f a1 fb 08 06        ........l.....
	[Aug13 20:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth638a0651
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 5b 20 34 63 04 08 06        .......[ 4c...
	[ +15.906177] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.377154] cgroup: cgroup2: unknown option "nsdelegate"
	[  +4.439794] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.000008] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.270961] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 1d 9a 5f 02 4e 08 06        ......j.._.N..
	[ +14.030429] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9a8d7a44
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d6 75 db 96 9a 5d 08 06        .......u...]..
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.579695] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803] <==
	* 2021-08-13 20:39:56.508638 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-13 20:40:14.023652 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" " with result "range_response_count:1 size:4260" took too long (353.715245ms) to execute
	2021-08-13 20:40:14.023710 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (385.087564ms) to execute
	2021-08-13 20:40:14.023730 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:0 size:5" took too long (742.628324ms) to execute
	2021-08-13 20:40:14.023825 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" " with result "range_response_count:1 size:210" took too long (742.874769ms) to execute
	2021-08-13 20:40:15.423385 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 20:40:15.593861 W | wal: sync duration of 1.564336648s, expected less than 1s
	2021-08-13 20:40:15.594635 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.454395348s) to execute
	2021-08-13 20:40:16.242009 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:1 size:210" took too long (641.602297ms) to execute
	2021-08-13 20:40:16.242033 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:0 size:5" took too long (640.465675ms) to execute
	2021-08-13 20:40:16.242101 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" " with result "range_response_count:1 size:4260" took too long (641.294407ms) to execute
	2021-08-13 20:40:17.051976 W | etcdserver: request "header:<ID:8128006947642344446 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" mod_revision:289 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" value_size:3977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" > >>" with result "size:16" took too long (432.247936ms) to execute
	2021-08-13 20:40:18.660824 W | wal: sync duration of 1.725448216s, expected less than 1s
	2021-08-13 20:40:18.929173 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.00007561s) to execute
	WARNING: 2021/08/13 20:40:18 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-13 20:40:19.784013 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (3.144407226s) to execute
	2021-08-13 20:40:19.784107 W | etcdserver: request "header:<ID:8128006947642344449 > lease_revoke:<id:70cc7b413e210346>" with result "size:29" took too long (1.12306053s) to execute
	2021-08-13 20:40:19.784416 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (2.727675957s) to execute
	2021-08-13 20:40:19.784657 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (2.728835843s) to execute
	2021-08-13 20:40:19.790810 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813203929-13784\" " with result "range_response_count:1 size:3976" took too long (848.789811ms) to execute
	2021-08-13 20:40:24.423766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:25.164612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:35.163828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:45.164164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:55.164376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
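
	The etcd warnings above ("wal: sync duration of 1.564336648s, expected less than 1s" and the "took too long ... to execute" range requests) point at slow disk fsync under the parallel test load rather than a logic failure: etcd flags any WAL fsync over one second and any request that runs well past its expected latency. A standalone sketch that probes raw fsync latency on a volume, for illustration only; etcd's own check lives in its WAL code:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	f, err := os.CreateTemp("", "fsync-probe")
    	if err != nil {
    		panic(err)
    	}
    	defer os.Remove(f.Name())
    	defer f.Close()

    	buf := make([]byte, 8*1024)
    	for i := 0; i < 5; i++ {
    		start := time.Now()
    		if _, err := f.Write(buf); err != nil {
    			panic(err)
    		}
    		// Sync forces the write to stable storage, the same syscall
    		// path etcd's WAL timing measures.
    		if err := f.Sync(); err != nil {
    			panic(err)
    		}
    		fmt.Printf("fsync %d took %v\n", i, time.Since(start))
    	}
    }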
	
	* 
	* ==> kernel <==
	*  20:41:14 up  1:23,  0 users,  load average: 5.66, 3.32, 1.82
	Linux pause-20210813203929-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f] <==
	* I0813 20:40:19.785459       1 trace.go:205] Trace[528132474]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:18.938) (total time: 847ms):
	Trace[528132474]: ---"Object stored in database" 846ms (20:40:00.785)
	Trace[528132474]: [847.010131ms] [847.010131ms] END
	I0813 20:40:19.785471       1 trace.go:205] Trace[572814057]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813203929-13784,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:19.248) (total time: 537ms):
	Trace[572814057]: ---"Object stored in database" 536ms (20:40:00.785)
	Trace[572814057]: [537.086569ms] [537.086569ms] END
	I0813 20:40:19.785540       1 trace.go:205] Trace[1356449652]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/replicaset-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/tokens-controller,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:40:17.055) (total time: 2730ms):
	Trace[1356449652]: ---"About to write a response" 2729ms (20:40:00.785)
	Trace[1356449652]: [2.730061569s] [2.730061569s] END
	I0813 20:40:19.786359       1 trace.go:205] Trace[769020975]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:16.639) (total time: 3147ms):
	Trace[769020975]: [3.147238596s] [3.147238596s] END
	I0813 20:40:19.787020       1 trace.go:205] Trace[847010192]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/node-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/kube-controller-manager,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:40:17.056) (total time: 2730ms):
	Trace[847010192]: ---"About to write a response" 2730ms (20:40:00.786)
	Trace[847010192]: [2.730588631s] [2.730588631s] END
	I0813 20:40:19.787603       1 trace.go:205] Trace[2107042432]: "Patch" url:/api/v1/nodes/pause-20210813203929-13784/status,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:19.069) (total time: 718ms):
	Trace[2107042432]: ---"Object stored in database" 714ms (20:40:00.785)
	Trace[2107042432]: [718.430347ms] [718.430347ms] END
	I0813 20:40:19.793912       1 trace.go:205] Trace[1263737565]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-pause-20210813203929-13784,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:18.941) (total time: 852ms):
	Trace[1263737565]: ---"About to write a response" 852ms (20:40:00.793)
	Trace[1263737565]: [852.227308ms] [852.227308ms] END
	I0813 20:40:23.498807       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:40:23.848939       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:40:39.165407       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:40:39.165451       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:40:39.165458       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
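
	The Trace[...] blocks above come from the request tracing used inside the apiserver: steps are recorded as a request progresses, and the whole trace is emitted only when total time crosses a latency threshold, which is why only the slow requests show up in this log. A minimal sketch of that pattern with k8s.io/utils/trace; the handler and the sleep are stand-ins for real storage access:

    package main

    import (
    	"time"

    	utiltrace "k8s.io/utils/trace"
    )

    func handleRequest() {
    	t := utiltrace.New("Get", utiltrace.Field{Key: "url", Value: "/api/v1/..."})
    	// Only emit the trace if the request took longer than the threshold,
    	// matching why only 500ms+ requests appear in the apiserver log.
    	defer t.LogIfLong(500 * time.Millisecond)

    	time.Sleep(600 * time.Millisecond) // stand-in for a slow etcd read
    	t.Step("About to write a response")
    }

    func main() { handleRequest() }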
	
	* 
	* ==> kube-controller-manager [de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637] <==
	* I0813 20:40:22.945115       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0813 20:40:22.946089       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0813 20:40:22.946088       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:40:22.946247       1 shared_informer.go:247] Caches are synced for cronjob 
	I0813 20:40:22.946455       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:40:22.952128       1 shared_informer.go:247] Caches are synced for namespace 
	I0813 20:40:23.005106       1 shared_informer.go:247] Caches are synced for endpoint 
	I0813 20:40:23.145003       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:40:23.145579       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:23.150452       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 20:40:23.168777       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:40:23.168800       1 disruption.go:371] Sending events to api server.
	I0813 20:40:23.181021       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:23.507939       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pjb6w"
	I0813 20:40:23.517415       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k8wlb"
	E0813 20:40:23.548254       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"1eae9bdb-1aea-4c39-a2f9-a9df683878b4", ResourceVersion:"265", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484003, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00175a8d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00175a8e8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019c5800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a918), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a930), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5820)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5860)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00048fc20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001baacb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000c8a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000ae2ff0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001baad00)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:40:23.661112       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.661142       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:23.667920       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.851329       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:23.871778       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:23.965355       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ncl4r"
	I0813 20:40:23.980674       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ts9sl"
	I0813 20:40:24.022166       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-ncl4r"
	
	* 
	* ==> kube-proxy [15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e] <==
	* I0813 20:40:24.404639       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:40:24.404699       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:40:24.404733       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:24.484285       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:24.484325       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:24.484338       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:24.484352       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:24.484722       1 server.go:643] Version: v1.21.3
	I0813 20:40:24.485344       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:24.485368       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:40:24.485394       1 config.go:315] Starting service config controller
	I0813 20:40:24.485398       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 20:40:24.494950       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:24.496346       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:24.585594       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:24.585676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
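
	The "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup handshake: each controller blocks until its informer completes an initial List before it starts processing events. A small sketch of that wiring, assuming a reachable kubeconfig; the Services informer is just an example resource:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
    	svcInformer := factory.Core().V1().Services().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	// Block until the first full List completes, like the log's
    	// "Waiting for caches to sync for service config".
    	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
    		panic("caches never synced")
    	}
    	fmt.Println("Caches are synced for service config")
    }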
	
	* 
	* ==> kube-scheduler [ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662] <==
	* I0813 20:40:00.664885       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:40:00.664904       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:40:00.665208       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:40:00.668026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:00.668147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:00.668368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.668713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:00.669360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:00.669416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669423       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:00.669448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.680678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:00.680761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:00.680779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:01.515088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:01.520824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:01.529811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:01.530681       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:01.534900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:01.603579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:01.619709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:02.065154       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
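
	The burst of "forbidden" reflector errors above is the usual scheduler startup race: it begins listing resources before the apiserver has finished seeding the system:kube-scheduler RBAC bindings, and the errors stop once the caches sync at 20:40:02. One way to probe such a permission from code is a SelfSubjectAccessReview; a sketch, with list/nodes as an example verb/resource pair:

    package main

    import (
    	"context"
    	"fmt"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Ask the apiserver whether the current identity may list nodes,
    	// the same check the scheduler's reflectors were failing above.
    	review := &authv1.SelfSubjectAccessReview{
    		Spec: authv1.SelfSubjectAccessReviewSpec{
    			ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "nodes"},
    		},
    	}
    	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
    		context.TODO(), review, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("allowed:", res.Status.Allowed)
    }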
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:14 UTC. --
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:26.118723    1594 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/f8415df1-329a-4761-8b93-08dab691c8a1/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.118873    1594 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8415df1-329a-4761-8b93-08dab691c8a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f8415df1-329a-4761-8b93-08dab691c8a1" (UID: "f8415df1-329a-4761-8b93-08dab691c8a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.141958    1594 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8415df1-329a-4761-8b93-08dab691c8a1-kube-api-access-c659l" (OuterVolumeSpecName: "kube-api-access-c659l") pod "f8415df1-329a-4761-8b93-08dab691c8a1" (UID: "f8415df1-329a-4761-8b93-08dab691c8a1"). InnerVolumeSpecName "kube-api-access-c659l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.219158    1594 reconciler.go:319] "Volume detached for volume \"kube-api-access-c659l\" (UniqueName: \"kubernetes.io/projected/f8415df1-329a-4761-8b93-08dab691c8a1-kube-api-access-c659l\") on node \"pause-20210813203929-13784\" DevicePath \"\""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.219197    1594 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8415df1-329a-4761-8b93-08dab691c8a1-config-volume\") on node \"pause-20210813203929-13784\" DevicePath \"\""
	Aug 13 20:40:29 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:29.197345    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940415    1594 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940484    1594 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ts9sl"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940509    1594 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ts9sl"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940578    1594 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-ts9sl_kube-system(da06b52c-7664-4a7e-98ae-ea1e61dc5560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-ts9sl_kube-system(da06b52c-7664-4a7e-98ae-ea1e61dc5560)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-ts9sl" podUID=da06b52c-7664-4a7e-98ae-ea1e61dc5560
	Aug 13 20:40:35 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:35.167096    1594 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ncl4r_kube-system_f8415df1-329a-4761-8b93-08dab691c8a1_0(c1acc36c8a211e5c0e2add67b80fda71e4e6a48ceab697fd761bcf3b536fd5f4): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 20:40:35 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:35.167217    1594 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ncl4r_kube-system_f8415df1-329a-4761-8b93-08dab691c8a1_0(c1acc36c8a211e5c0e2add67b80fda71e4e6a48ceab697fd761bcf3b536fd5f4): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ncl4r"
	Aug 13 20:40:39 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:39.269570    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:49 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:49.324415    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:53.056122    1594 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:53.056137    1594 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057341    1594 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057421    1594 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057445    1594 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.005445    1594 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.190781    1594 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5bba0aa8-5d05-4858-b5af-a2456279867c-tmp\") pod \"storage-provisioner\" (UID: \"5bba0aa8-5d05-4858-b5af-a2456279867c\") "
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.190885    1594 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qszs7\" (UniqueName: \"kubernetes.io/projected/5bba0aa8-5d05-4858-b5af-a2456279867c-kube-api-access-qszs7\") pod \"storage-provisioner\" (UID: \"5bba0aa8-5d05-4858-b5af-a2456279867c\") "
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b] <==
	* 
	goroutine 111 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc0004b4b90, 0x3)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0004b4b80)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00013a4e0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000440c80, 0x18e5530, 0xc000046100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005e7200)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005e7200, 0x18b3d60, 0xc000272000, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005e7200, 0x3b9aca00, 0x0, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0005e7200, 0x3b9aca00, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:41:14.706680  187135 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-13784
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860",
	        "Created": "2021-08-13T20:39:31.372712772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:31.872578968Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hosts",
	        "LogPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860-json.log",
	        "Name": "/pause-20210813203929-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/merged",
	                "UpperDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/diff",
	                "WorkDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-13784",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a821792d507c6dabf086e5652e018123e85e4b030464132aafdef8bc15a9d200",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a821792d507c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ce53ded591b3"
	                    ],
	                    "NetworkID": "a8af35fe90fb5b850638bd77da889b067a8390ebee6680d76e896390e70a0e9e",
	                    "EndpointID": "0b310d5a393fb3e0184bcf23f10e5a3746cbeb23b4b202e9e5c6f681f15cdcfa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784: exit status 2 (406.723544ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25
E0813 20:41:16.135692   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25: exit status 110 (11.384408563s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:29 UTC | Fri, 13 Aug 2021 20:37:03 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker          |                                           |         |         |                               |                               |
	|         |  --container-runtime=crio                 |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3              |                                           |         |         |                               |                               |
	| ssh     | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:03 UTC | Fri, 13 Aug 2021 20:37:03 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	|         | -- sudo crictl image ls                   |                                           |         |         |                               |                               |
	| -p      | test-preload-20210813203431-13784         | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:04 UTC | Fri, 13 Aug 2021 20:37:05 UTC |
	|         | logs -n 25                                |                                           |         |         |                               |                               |
	| delete  | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:05 UTC | Fri, 13 Aug 2021 20:37:10 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	| start   | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:10 UTC | Fri, 13 Aug 2021 20:37:47 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --memory=2048 --driver=docker             |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:48 UTC | Fri, 13 Aug 2021 20:37:48 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --cancel-scheduled                        |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:00 UTC | Fri, 13 Aug 2021 20:38:26 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --schedule 5s                             |                                           |         |         |                               |                               |
	| delete  | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:28 UTC | Fri, 13 Aug 2021 20:38:33 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	| delete  | -p                                        | insufficient-storage-20210813203833-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:40 UTC | Fri, 13 Aug 2021 20:38:46 UTC |
	|         | insufficient-storage-20210813203833-13784 |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	|         | --memory=2048 --force-systemd             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:49 UTC | Fri, 13 Aug 2021 20:39:31 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=5 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:31 UTC | Fri, 13 Aug 2021 20:39:35 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	| start   | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:40:06 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                    |                                           |         |         |                               |                               |
	|         | --memory=2048 --wait=true                 |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:06 UTC | Fri, 13 Aug 2021 20:40:09 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	| delete  | -p                                        | kubenet-20210813204009-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:09 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | kubenet-20210813204009-13784              |                                           |         |         |                               |                               |
	| delete  | -p                                        | flannel-20210813204010-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:10 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | flannel-20210813204010-13784              |                                           |         |         |                               |                               |
	| delete  | -p false-20210813204010-13784             | false-20210813204010-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:11 UTC | Fri, 13 Aug 2021 20:40:11 UTC |
	| start   | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:35 UTC | Fri, 13 Aug 2021 20:40:23 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                 |                                           |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15             |                                           |         |         |                               |                               |
	|         | --apiserver-names=localhost               |                                           |         |         |                               |                               |
	|         | --apiserver-names=www.google.com          |                                           |         |         |                               |                               |
	|         | --apiserver-port=8555                     |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| -p      | cert-options-20210813203935-13784         | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:23 UTC | Fri, 13 Aug 2021 20:40:24 UTC |
	|         | ssh openssl x509 -text -noout -in         |                                           |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:24 UTC | Fri, 13 Aug 2021 20:40:27 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --install-addons=false                    |                                           |         |         |                               |                               |
	|         | --wait=all --driver=docker                |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:57 UTC |
	|         | --alsologtostderr                         |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:27 UTC | Fri, 13 Aug 2021 20:41:10 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	|         | --memory=2200                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0              |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:10 UTC | Fri, 13 Aug 2021 20:41:13 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:41:13
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:41:13.216170  190586 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:41:13.216266  190586 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:13.216275  190586 out.go:311] Setting ErrFile to fd 2...
	I0813 20:41:13.216278  190586 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:13.216397  190586 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:41:13.216623  190586 out.go:305] Setting JSON to false
	I0813 20:41:13.253752  190586 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5036,"bootTime":1628882237,"procs":259,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:41:13.253885  190586 start.go:121] virtualization: kvm guest
	I0813 20:41:13.256490  190586 out.go:177] * [kubernetes-upgrade-20210813204027-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:41:13.256589  190586 notify.go:169] Checking for updates...
	I0813 20:41:13.259158  190586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:13.260508  190586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:41:13.261955  190586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:41:13.263282  190586 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:41:13.263774  190586 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204027-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:41:13.264169  190586 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:41:13.321386  190586 docker.go:132] docker version: linux-19.03.15
	I0813 20:41:13.321545  190586 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:13.416471  190586 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 20:41:13.36746279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:41:13.416564  190586 docker.go:244] overlay module found
	I0813 20:41:13.418464  190586 out.go:177] * Using the docker driver based on existing profile
	I0813 20:41:13.418492  190586 start.go:278] selected driver: docker
	I0813 20:41:13.418499  190586 start.go:751] validating driver "docker" against &{Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:13.418589  190586 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:41:13.418632  190586 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:13.418675  190586 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:13.420085  190586 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:13.420924  190586 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:13.517347  190586 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 20:41:13.458391048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:41:13.517548  190586 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:13.517602  190586 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:13.519735  190586 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:13.519825  190586 cni.go:93] Creating CNI manager for ""
	I0813 20:41:13.519837  190586 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:41:13.519852  190586 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:13.521530  190586 out.go:177] * Starting control plane node kubernetes-upgrade-20210813204027-13784 in cluster kubernetes-upgrade-20210813204027-13784
	I0813 20:41:13.521575  190586 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:41:13.522935  190586 out.go:177] * Pulling base image ...
	I0813 20:41:13.522964  190586 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:41:13.523002  190586 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:41:13.523019  190586 cache.go:56] Caching tarball of preloaded images
	I0813 20:41:13.523073  190586 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:41:13.523200  190586 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:41:13.523219  190586 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 20:41:13.523365  190586 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/config.json ...
	I0813 20:41:13.620212  190586 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:41:13.620245  190586 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:41:13.620270  190586 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:41:13.620330  190586 start.go:313] acquiring machines lock for kubernetes-upgrade-20210813204027-13784: {Name:mk867fd1b3701cb21737f832aa092309ed957057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:13.620455  190586 start.go:317] acquired machines lock for "kubernetes-upgrade-20210813204027-13784" in 93.039µs
	I0813 20:41:13.620490  190586 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:41:13.620503  190586 fix.go:55] fixHost starting: 
	I0813 20:41:13.620859  190586 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:41:13.660909  190586 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210813204027-13784: state=Stopped err=<nil>
	W0813 20:41:13.660964  190586 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:41:12.761026  184062 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:41:12.761059  184062 machine.go:91] provisioned docker machine in 1.026170522s
	I0813 20:41:12.761071  184062 start.go:267] post-start starting for "missing-upgrade-20210813203846-13784" (driver="docker")
	I0813 20:41:12.761079  184062 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:12.761143  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:12.761189  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:12.803241  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:12.892576  184062 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:12.895195  184062 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:12.895221  184062 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:12.895234  184062 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:12.895242  184062 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:41:12.895253  184062 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:12.895306  184062 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:12.895406  184062 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:41:12.895524  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:12.901776  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:12.917411  184062 start.go:270] post-start completed in 156.325269ms
	I0813 20:41:12.917471  184062 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:12.917549  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:12.960313  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.094246  184062 fix.go:57] fixHost completed within 29.67467079s
	I0813 20:41:13.094280  184062 start.go:80] releasing machines lock for "missing-upgrade-20210813203846-13784", held for 29.674767502s
	I0813 20:41:13.094368  184062 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20210813203846-13784
	I0813 20:41:13.147968  184062 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:13.148022  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:13.148030  184062 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:13.148132  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:13.195943  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.196112  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.285243  184062 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:41:13.437575  184062 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:41:13.446764  184062 docker.go:153] disabling docker service ...
	I0813 20:41:13.446817  184062 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:41:13.456218  184062 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:41:13.466857  184062 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:41:13.543702  184062 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:41:13.626332  184062 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:41:13.636155  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:41:13.648901  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0813 20:41:13.657710  184062 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:41:13.663842  184062 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:41:13.663894  184062 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:41:13.670882  184062 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:41:13.676999  184062 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:41:13.742334  184062 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:41:13.753589  184062 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:41:13.753659  184062 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:41:13.756898  184062 start.go:413] Will wait 60s for crictl version
	I0813 20:41:13.756951  184062 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:13.785844  184062 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:41:13.785924  184062 ssh_runner.go:149] Run: crio --version
	I0813 20:41:13.850759  184062 ssh_runner.go:149] Run: crio --version
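
At this point the interleaved PID-184062 stream (the missing-upgrade test running in parallel with PID 190586 above, which is why the timestamps jump backwards at 20:41:12.761) has masked Docker, written /etc/crictl.yaml, and started CRI-O. A minimal sketch for verifying such a runtime switch by hand, assuming a minikube ssh shell on the node; "crictl version" is taken from the log, the other two commands are additions here:

	sudo systemctl is-active crio   # should print "active" once the switch succeeded
	sudo crictl version             # RuntimeName should report cri-o 1.20.3, as logged at 20:41:13.785
	sudo crictl info                # runtime status as JSON; confirms /var/run/crio/crio.sock answers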
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:16 UTC. --
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.135173785Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.136754653Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139480430Z" level=info msg="Conmon does support the --sync option"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139554604Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139563583Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.144516509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.147006953Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.149207086Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160317327Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160348934Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558550091Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-ts9sl Namespace:kube-system ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 NetNS:/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558791773Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 13 20:40:53 pause-20210813203929-13784 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.306861254Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.450483242Z" level=info msg="Ran pod sandbox c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 with infra container: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.451659183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452327272Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452980775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.453580166Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.454314289Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466027097Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/passwd: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466066274Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/group: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.662676174Z" level=info msg="Created container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.663296728Z" level=info msg="Starting container: 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.673744233Z" level=info msg="Started container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	8422317486aff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   19 seconds ago       Exited              storage-provisioner       0                   c9be4b40ae287
	f5e960ccbf41e       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   26 seconds ago       Running             coredns                   0                   32623516945f8
	0e66c2b5613f5       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   51 seconds ago       Running             kindnet-cni               0                   3a829ab2057cc
	15fb32d86d158       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   52 seconds ago       Running             kube-proxy                0                   125d82aa8b508
	765e30beb45ae       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   eae2bc9a9df7c
	ecf109c279e47       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   3c556f4397e88
	de897ce9eab3c       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   4483c604f9ed1
	0a93b9e0c15af       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   436db4ab23452
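
For reference, the table above follows the format of crictl's container listing; assuming the /etc/crictl.yaml endpoint configured earlier, it can be reproduced on the node with:

	sudo crictl ps -a   # -a includes exited containers, e.g. the Exited storage-provisioner above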
	
	* 
	* ==> coredns [f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.138700] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.390684] cgroup: cgroup2: unknown option "nsdelegate"
	[  +2.362662] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[  +6.305228] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.592043] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.311286] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a d3 6c 1f a1 fb 08 06        ........l.....
	[Aug13 20:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth638a0651
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 5b 20 34 63 04 08 06        .......[ 4c...
	[ +15.906177] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.377154] cgroup: cgroup2: unknown option "nsdelegate"
	[  +4.439794] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.000008] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.270961] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 1d 9a 5f 02 4e 08 06        ......j.._.N..
	[ +14.030429] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9a8d7a44
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d6 75 db 96 9a 5d 08 06        .......u...]..
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.579695] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803] <==
	* 2021-08-13 20:39:56.508638 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-13 20:40:14.023652 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" " with result "range_response_count:1 size:4260" took too long (353.715245ms) to execute
	2021-08-13 20:40:14.023710 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (385.087564ms) to execute
	2021-08-13 20:40:14.023730 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:0 size:5" took too long (742.628324ms) to execute
	2021-08-13 20:40:14.023825 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" " with result "range_response_count:1 size:210" took too long (742.874769ms) to execute
	2021-08-13 20:40:15.423385 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 20:40:15.593861 W | wal: sync duration of 1.564336648s, expected less than 1s
	2021-08-13 20:40:15.594635 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.454395348s) to execute
	2021-08-13 20:40:16.242009 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:1 size:210" took too long (641.602297ms) to execute
	2021-08-13 20:40:16.242033 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:0 size:5" took too long (640.465675ms) to execute
	2021-08-13 20:40:16.242101 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" " with result "range_response_count:1 size:4260" took too long (641.294407ms) to execute
	2021-08-13 20:40:17.051976 W | etcdserver: request "header:<ID:8128006947642344446 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" mod_revision:289 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" value_size:3977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" > >>" with result "size:16" took too long (432.247936ms) to execute
	2021-08-13 20:40:18.660824 W | wal: sync duration of 1.725448216s, expected less than 1s
	2021-08-13 20:40:18.929173 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.00007561s) to execute
	WARNING: 2021/08/13 20:40:18 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-13 20:40:19.784013 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (3.144407226s) to execute
	2021-08-13 20:40:19.784107 W | etcdserver: request "header:<ID:8128006947642344449 > lease_revoke:<id:70cc7b413e210346>" with result "size:29" took too long (1.12306053s) to execute
	2021-08-13 20:40:19.784416 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (2.727675957s) to execute
	2021-08-13 20:40:19.784657 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (2.728835843s) to execute
	2021-08-13 20:40:19.790810 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813203929-13784\" " with result "range_response_count:1 size:3976" took too long (848.789811ms) to execute
	2021-08-13 20:40:24.423766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:25.164612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:35.163828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:45.164164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:55.164376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
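
The repeated "sync duration ... expected less than 1s" and "took too long ... to execute" warnings above usually point to disk contention on the CI host rather than an etcd problem. A hedged sketch for checking etcd's view from the node (assumptions: etcdctl is available, e.g. inside the etcd container, and the certificate paths follow minikube's usual /var/lib/minikube/certs layout):

	ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status -w table   # leader, DB size, raft index; pair with 'check perf' to benchmark the backend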
	
	* 
	* ==> kernel <==
	*  20:41:26 up  1:24,  0 users,  load average: 5.48, 3.36, 1.85
	Linux pause-20210813203929-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f] <==
	* I0813 20:40:19.785459       1 trace.go:205] Trace[528132474]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:18.938) (total time: 847ms):
	Trace[528132474]: ---"Object stored in database" 846ms (20:40:00.785)
	Trace[528132474]: [847.010131ms] [847.010131ms] END
	I0813 20:40:19.785471       1 trace.go:205] Trace[572814057]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813203929-13784,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:19.248) (total time: 537ms):
	Trace[572814057]: ---"Object stored in database" 536ms (20:40:00.785)
	Trace[572814057]: [537.086569ms] [537.086569ms] END
	I0813 20:40:19.785540       1 trace.go:205] Trace[1356449652]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/replicaset-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/tokens-controller,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:40:17.055) (total time: 2730ms):
	Trace[1356449652]: ---"About to write a response" 2729ms (20:40:00.785)
	Trace[1356449652]: [2.730061569s] [2.730061569s] END
	I0813 20:40:19.786359       1 trace.go:205] Trace[769020975]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:16.639) (total time: 3147ms):
	Trace[769020975]: [3.147238596s] [3.147238596s] END
	I0813 20:40:19.787020       1 trace.go:205] Trace[847010192]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/node-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/kube-controller-manager,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:40:17.056) (total time: 2730ms):
	Trace[847010192]: ---"About to write a response" 2730ms (20:40:00.786)
	Trace[847010192]: [2.730588631s] [2.730588631s] END
	I0813 20:40:19.787603       1 trace.go:205] Trace[2107042432]: "Patch" url:/api/v1/nodes/pause-20210813203929-13784/status,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:19.069) (total time: 718ms):
	Trace[2107042432]: ---"Object stored in database" 714ms (20:40:00.785)
	Trace[2107042432]: [718.430347ms] [718.430347ms] END
	I0813 20:40:19.793912       1 trace.go:205] Trace[1263737565]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-pause-20210813203929-13784,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:18.941) (total time: 852ms):
	Trace[1263737565]: ---"About to write a response" 852ms (20:40:00.793)
	Trace[1263737565]: [852.227308ms] [852.227308ms] END
	I0813 20:40:23.498807       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:40:23.848939       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:40:39.165407       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:40:39.165451       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:40:39.165458       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637] <==
	* I0813 20:40:22.945115       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0813 20:40:22.946089       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0813 20:40:22.946088       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:40:22.946247       1 shared_informer.go:247] Caches are synced for cronjob 
	I0813 20:40:22.946455       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:40:22.952128       1 shared_informer.go:247] Caches are synced for namespace 
	I0813 20:40:23.005106       1 shared_informer.go:247] Caches are synced for endpoint 
	I0813 20:40:23.145003       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:40:23.145579       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:23.150452       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 20:40:23.168777       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:40:23.168800       1 disruption.go:371] Sending events to api server.
	I0813 20:40:23.181021       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:23.507939       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pjb6w"
	I0813 20:40:23.517415       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k8wlb"
	E0813 20:40:23.548254       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"1eae9bdb-1aea-4c39-a2f9-a9df683878b4", ResourceVersion:"265", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484003, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00175a8d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00175a8e8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019c5800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a918), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a930), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5820)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5860)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00048fc20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001baacb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000c8a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000ae2ff0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001baad00)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:40:23.661112       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.661142       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:23.667920       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.851329       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:23.871778       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:23.965355       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ncl4r"
	I0813 20:40:23.980674       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ts9sl"
	I0813 20:40:24.022166       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-ncl4r"
	
	* 
	* ==> kube-proxy [15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e] <==
	* I0813 20:40:24.404639       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:40:24.404699       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:40:24.404733       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:24.484285       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:24.484325       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:24.484338       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:24.484352       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:24.484722       1 server.go:643] Version: v1.21.3
	I0813 20:40:24.485344       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:24.485368       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:40:24.485394       1 config.go:315] Starting service config controller
	I0813 20:40:24.485398       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 20:40:24.494950       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:24.496346       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:24.585594       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:24.585676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662] <==
	* I0813 20:40:00.664904       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:40:00.665208       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:40:00.668026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:00.668147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:00.668368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.668713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:00.669360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:00.669416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669423       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:00.669448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.680678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:00.680761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:00.680779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:01.515088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:01.520824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:01.529811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:01.530681       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:01.534900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:01.603579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:01.619709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:02.065154       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0813 20:41:17.023801       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:26 UTC. --
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:26.118723    1594 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/f8415df1-329a-4761-8b93-08dab691c8a1/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.118873    1594 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8415df1-329a-4761-8b93-08dab691c8a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f8415df1-329a-4761-8b93-08dab691c8a1" (UID: "f8415df1-329a-4761-8b93-08dab691c8a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.141958    1594 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8415df1-329a-4761-8b93-08dab691c8a1-kube-api-access-c659l" (OuterVolumeSpecName: "kube-api-access-c659l") pod "f8415df1-329a-4761-8b93-08dab691c8a1" (UID: "f8415df1-329a-4761-8b93-08dab691c8a1"). InnerVolumeSpecName "kube-api-access-c659l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.219158    1594 reconciler.go:319] "Volume detached for volume \"kube-api-access-c659l\" (UniqueName: \"kubernetes.io/projected/f8415df1-329a-4761-8b93-08dab691c8a1-kube-api-access-c659l\") on node \"pause-20210813203929-13784\" DevicePath \"\""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.219197    1594 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8415df1-329a-4761-8b93-08dab691c8a1-config-volume\") on node \"pause-20210813203929-13784\" DevicePath \"\""
	Aug 13 20:40:29 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:29.197345    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940415    1594 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940484    1594 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ts9sl"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940509    1594 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ts9sl"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940578    1594 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-ts9sl_kube-system(da06b52c-7664-4a7e-98ae-ea1e61dc5560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-ts9sl_kube-system(da06b52c-7664-4a7e-98ae-ea1e61dc5560)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-ts9sl" podUID=da06b52c-7664-4a7e-98ae-ea1e61dc5560
	Aug 13 20:40:35 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:35.167096    1594 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ncl4r_kube-system_f8415df1-329a-4761-8b93-08dab691c8a1_0(c1acc36c8a211e5c0e2add67b80fda71e4e6a48ceab697fd761bcf3b536fd5f4): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 20:40:35 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:35.167217    1594 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ncl4r_kube-system_f8415df1-329a-4761-8b93-08dab691c8a1_0(c1acc36c8a211e5c0e2add67b80fda71e4e6a48ceab697fd761bcf3b536fd5f4): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ncl4r"
	Aug 13 20:40:39 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:39.269570    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:49 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:49.324415    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:53.056122    1594 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:53.056137    1594 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057341    1594 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057421    1594 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057445    1594 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.005445    1594 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.190781    1594 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5bba0aa8-5d05-4858-b5af-a2456279867c-tmp\") pod \"storage-provisioner\" (UID: \"5bba0aa8-5d05-4858-b5af-a2456279867c\") "
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.190885    1594 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qszs7\" (UniqueName: \"kubernetes.io/projected/5bba0aa8-5d05-4858-b5af-a2456279867c-kube-api-access-qszs7\") pod \"storage-provisioner\" (UID: \"5bba0aa8-5d05-4858-b5af-a2456279867c\") "
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b] <==
	* 
	goroutine 111 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc0004b4b90, 0x3)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0004b4b80)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00013a4e0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000440c80, 0x18e5530, 0xc000046100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005e7200)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005e7200, 0x18b3d60, 0xc000272000, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005e7200, 0x3b9aca00, 0x0, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0005e7200, 0x3b9aca00, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0813 20:41:26.357723  192158 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/Pause (29.65s)

TestPause/serial/VerifyStatus (11.84s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210813203929-13784 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210813203929-13784 --output=json --layout=cluster: exit status 2 (379.936074ms)

-- stdout --
	{"Name":"pause-20210813203929-13784","StatusCode":101,"StatusName":"Pausing","Step":"Pausing","StepDetail":"* Pausing node pause-20210813203929-13784 ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210813203929-13784","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0813 20:41:27.364359  195169 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0813 20:41:27.364385  195169 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0813 20:41:27.364410  195169 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

** /stderr **
pause_test.go:190: incorrect status code: 101
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-13784
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-13784:

-- stdout --
	[
	    {
	        "Id": "ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860",
	        "Created": "2021-08-13T20:39:31.372712772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:31.872578968Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hosts",
	        "LogPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860-json.log",
	        "Name": "/pause-20210813203929-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/merged",
	                "UpperDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/diff",
	                "WorkDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-13784",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a821792d507c6dabf086e5652e018123e85e4b030464132aafdef8bc15a9d200",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a821792d507c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ce53ded591b3"
	                    ],
	                    "NetworkID": "a8af35fe90fb5b850638bd77da889b067a8390ebee6680d76e896390e70a0e9e",
	                    "EndpointID": "0b310d5a393fb3e0184bcf23f10e5a3746cbeb23b4b202e9e5c6f681f15cdcfa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784: exit status 2 (364.990071ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25: exit status 110 (11.012125882s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:36:29 UTC | Fri, 13 Aug 2021 20:37:03 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker          |                                           |         |         |                               |                               |
	|         |  --container-runtime=crio                 |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3              |                                           |         |         |                               |                               |
	| ssh     | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:03 UTC | Fri, 13 Aug 2021 20:37:03 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	|         | -- sudo crictl image ls                   |                                           |         |         |                               |                               |
	| -p      | test-preload-20210813203431-13784         | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:04 UTC | Fri, 13 Aug 2021 20:37:05 UTC |
	|         | logs -n 25                                |                                           |         |         |                               |                               |
	| delete  | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:05 UTC | Fri, 13 Aug 2021 20:37:10 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	| start   | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:10 UTC | Fri, 13 Aug 2021 20:37:47 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --memory=2048 --driver=docker             |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:48 UTC | Fri, 13 Aug 2021 20:37:48 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --cancel-scheduled                        |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:00 UTC | Fri, 13 Aug 2021 20:38:26 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --schedule 5s                             |                                           |         |         |                               |                               |
	| delete  | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:28 UTC | Fri, 13 Aug 2021 20:38:33 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	| delete  | -p                                        | insufficient-storage-20210813203833-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:40 UTC | Fri, 13 Aug 2021 20:38:46 UTC |
	|         | insufficient-storage-20210813203833-13784 |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	|         | --memory=2048 --force-systemd             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:49 UTC | Fri, 13 Aug 2021 20:39:31 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=5 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:31 UTC | Fri, 13 Aug 2021 20:39:35 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	| start   | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:40:06 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                    |                                           |         |         |                               |                               |
	|         | --memory=2048 --wait=true                 |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:06 UTC | Fri, 13 Aug 2021 20:40:09 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	| delete  | -p                                        | kubenet-20210813204009-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:09 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | kubenet-20210813204009-13784              |                                           |         |         |                               |                               |
	| delete  | -p                                        | flannel-20210813204010-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:10 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | flannel-20210813204010-13784              |                                           |         |         |                               |                               |
	| delete  | -p false-20210813204010-13784             | false-20210813204010-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:11 UTC | Fri, 13 Aug 2021 20:40:11 UTC |
	| start   | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:35 UTC | Fri, 13 Aug 2021 20:40:23 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                 |                                           |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15             |                                           |         |         |                               |                               |
	|         | --apiserver-names=localhost               |                                           |         |         |                               |                               |
	|         | --apiserver-names=www.google.com          |                                           |         |         |                               |                               |
	|         | --apiserver-port=8555                     |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| -p      | cert-options-20210813203935-13784         | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:23 UTC | Fri, 13 Aug 2021 20:40:24 UTC |
	|         | ssh openssl x509 -text -noout -in         |                                           |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:24 UTC | Fri, 13 Aug 2021 20:40:27 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --install-addons=false                    |                                           |         |         |                               |                               |
	|         | --wait=all --driver=docker                |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:57 UTC |
	|         | --alsologtostderr                         |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:27 UTC | Fri, 13 Aug 2021 20:41:10 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	|         | --memory=2200                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0              |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:10 UTC | Fri, 13 Aug 2021 20:41:13 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:41:13
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:41:13.216170  190586 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:41:13.216266  190586 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:13.216275  190586 out.go:311] Setting ErrFile to fd 2...
	I0813 20:41:13.216278  190586 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:13.216397  190586 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:41:13.216623  190586 out.go:305] Setting JSON to false
	I0813 20:41:13.253752  190586 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5036,"bootTime":1628882237,"procs":259,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:41:13.253885  190586 start.go:121] virtualization: kvm guest
	I0813 20:41:13.256490  190586 out.go:177] * [kubernetes-upgrade-20210813204027-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:41:13.256589  190586 notify.go:169] Checking for updates...
	I0813 20:41:13.259158  190586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:13.260508  190586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:41:13.261955  190586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:41:13.263282  190586 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:41:13.263774  190586 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204027-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:41:13.264169  190586 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:41:13.321386  190586 docker.go:132] docker version: linux-19.03.15
	I0813 20:41:13.321545  190586 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:13.416471  190586 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 20:41:13.36746279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:41:13.416564  190586 docker.go:244] overlay module found
	I0813 20:41:13.418464  190586 out.go:177] * Using the docker driver based on existing profile
	I0813 20:41:13.418492  190586 start.go:278] selected driver: docker
	I0813 20:41:13.418499  190586 start.go:751] validating driver "docker" against &{Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:13.418589  190586 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:41:13.418632  190586 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:13.418675  190586 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:13.420085  190586 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:13.420924  190586 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:13.517347  190586 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 20:41:13.458391048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:41:13.517548  190586 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:13.517602  190586 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:13.519735  190586 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:13.519825  190586 cni.go:93] Creating CNI manager for ""
	I0813 20:41:13.519837  190586 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:41:13.519852  190586 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:13.521530  190586 out.go:177] * Starting control plane node kubernetes-upgrade-20210813204027-13784 in cluster kubernetes-upgrade-20210813204027-13784
	I0813 20:41:13.521575  190586 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:41:13.522935  190586 out.go:177] * Pulling base image ...
	I0813 20:41:13.522964  190586 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:41:13.523002  190586 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:41:13.523019  190586 cache.go:56] Caching tarball of preloaded images
	I0813 20:41:13.523073  190586 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:41:13.523200  190586 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:41:13.523219  190586 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 20:41:13.523365  190586 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/config.json ...
	I0813 20:41:13.620212  190586 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:41:13.620245  190586 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:41:13.620270  190586 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:41:13.620330  190586 start.go:313] acquiring machines lock for kubernetes-upgrade-20210813204027-13784: {Name:mk867fd1b3701cb21737f832aa092309ed957057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:13.620455  190586 start.go:317] acquired machines lock for "kubernetes-upgrade-20210813204027-13784" in 93.039µs
	I0813 20:41:13.620490  190586 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:41:13.620503  190586 fix.go:55] fixHost starting: 
	I0813 20:41:13.620859  190586 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:41:13.660909  190586 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210813204027-13784: state=Stopped err=<nil>
	W0813 20:41:13.660964  190586 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:41:12.761026  184062 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:41:12.761059  184062 machine.go:91] provisioned docker machine in 1.026170522s
	I0813 20:41:12.761071  184062 start.go:267] post-start starting for "missing-upgrade-20210813203846-13784" (driver="docker")
	I0813 20:41:12.761079  184062 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:12.761143  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:12.761189  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:12.803241  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:12.892576  184062 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:12.895195  184062 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:12.895221  184062 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:12.895234  184062 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:12.895242  184062 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:41:12.895253  184062 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:12.895306  184062 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:12.895406  184062 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:41:12.895524  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:12.901776  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:12.917411  184062 start.go:270] post-start completed in 156.325269ms
	I0813 20:41:12.917471  184062 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:12.917549  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:12.960313  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.094246  184062 fix.go:57] fixHost completed within 29.67467079s
	I0813 20:41:13.094280  184062 start.go:80] releasing machines lock for "missing-upgrade-20210813203846-13784", held for 29.674767502s
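The "creating required directories" step in the post-start block above is idempotent by construction: all ten paths go into a single "sudo mkdir -p", so re-provisioning an existing machine is a no-op. A sketch, with exec.Command standing in for the SSH runner (an assumption; the real call goes over the ssh_runner shown in the log):

    package main

    import "os/exec"

    func main() {
        dirs := []string{
            "/etc/kubernetes/addons", "/etc/kubernetes/manifests",
            "/var/tmp/minikube", "/var/lib/minikube", "/var/lib/minikube/certs",
            "/var/lib/minikube/images", "/var/lib/minikube/binaries",
            "/tmp/gvisor", "/usr/share/ca-certificates", "/etc/ssl/certs",
        }
        // mkdir -p creates parents as needed and succeeds if the path already exists.
        cmd := exec.Command("sudo", append([]string{"mkdir", "-p"}, dirs...)...)
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(string(out))
        }
    }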
	I0813 20:41:13.094368  184062 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20210813203846-13784
	I0813 20:41:13.147968  184062 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:13.148022  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:13.148030  184062 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:13.148132  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:13.195943  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.196112  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.285243  184062 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:41:13.437575  184062 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:41:13.446764  184062 docker.go:153] disabling docker service ...
	I0813 20:41:13.446817  184062 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:41:13.456218  184062 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:41:13.466857  184062 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:41:13.543702  184062 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:41:13.626332  184062 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:41:13.636155  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:41:13.648901  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
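The /etc/crictl.yaml write just above points crictl (and the later "sudo crictl ..." runs in this log) at CRI-O's socket for both the runtime and image services. A sketch of the same write, again with a local exec call standing in for the SSH runner (assumption):

    package main

    import (
        "os/exec"
        "strings"
    )

    // The exact two-line config from the log.
    const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n" +
        "image-endpoint: unix:///var/run/crio/crio.sock\n"

    func main() {
        // Equivalent of: printf %s "..." | sudo tee /etc/crictl.yaml
        cmd := exec.Command("sudo", "tee", "/etc/crictl.yaml")
        cmd.Stdin = strings.NewReader(crictlYAML)
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }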
	I0813 20:41:13.657710  184062 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:41:13.663842  184062 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:41:13.663894  184062 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:41:13.670882  184062 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:41:13.676999  184062 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:41:13.742334  184062 ssh_runner.go:149] Run: sudo systemctl start crio
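The status-255 sysctl failure above is expected on a fresh kernel namespace: net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded, so the failed read is treated as a cue to modprobe rather than as a hard error ("which might be okay"). A sketch of that fallback, assuming root privileges:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Probe the key; if it is missing, load the module that provides it.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                panic(err)
            }
        }
        // Enable IPv4 forwarding for pod traffic, as the log does via "sudo sh -c".
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            panic(err)
        }
    }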
	I0813 20:41:13.753589  184062 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:41:13.753659  184062 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:41:13.756898  184062 start.go:413] Will wait 60s for crictl version
	I0813 20:41:13.756951  184062 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:13.785844  184062 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:41:13.785924  184062 ssh_runner.go:149] Run: crio --version
	I0813 20:41:13.850759  184062 ssh_runner.go:149] Run: crio --version
	I0813 20:41:13.920844  184062 out.go:177] * Preparing Kubernetes v1.18.0 on CRI-O 1.20.3 ...
	I0813 20:41:13.920925  184062 cli_runner.go:115] Run: docker network inspect missing-upgrade-20210813203846-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:13.958951  184062 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:13.962164  184062 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
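The /etc/hosts one-liner above is an idempotent "upsert" for host.minikube.internal: strip any existing line ending in that name, append the fresh IP mapping, and copy the result back into place. The same logic in Go (ensureHostsEntry is a hypothetical helper, not minikube's):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }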
	I0813 20:41:13.971398  184062 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0813 20:41:13.971437  184062 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:13.998484  184062 crio.go:420] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0813 20:41:13.998515  184062 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 20:41:13.998594  184062 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:41:13.998627  184062 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0813 20:41:13.998773  184062 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0813 20:41:13.998785  184062 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0813 20:41:13.998841  184062 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0813 20:41:13.998600  184062 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0813 20:41:13.998900  184062 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0813 20:41:13.998921  184062 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:13.998972  184062 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0813 20:41:13.998996  184062 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:41:13.999586  184062 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:41:13.999678  184062 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:41:13.999586  184062 image.go:175] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0813 20:41:13.999892  184062 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:41:14.007760  184062 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:41:14.015685  184062 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{Image:0xc0001a12c0}
	I0813 20:41:14.015787  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.2
	I0813 20:41:14.352143  184062 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc0000a2780}
	I0813 20:41:14.352245  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:14.446608  184062 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0813 20:41:14.446656  184062 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:14.446707  184062 ssh_runner.go:149] Run: which crictl
	I0813 20:41:14.450067  184062 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:14.463872  184062 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc0004822a0}
	I0813 20:41:14.463963  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:41:14.482896  184062 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 20:41:14.482979  184062 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:41:14.538448  184062 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0813 20:41:14.538497  184062 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:41:14.538549  184062 ssh_runner.go:149] Run: which crictl
	I0813 20:41:14.538560  184062 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0813 20:41:14.538587  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0813 20:41:14.545218  184062 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:41:14.608311  184062 crio.go:191] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:41:14.608382  184062 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
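The "needs transfer" decisions above reduce to one check: does the runtime already hold the cached image at exactly the expected ID? If not, the stale reference is removed and the cached tarball is scp'd in and side-loaded with podman. A condensed sketch (needsTransfer is a hypothetical helper; the image ID and paths are taken from the log):

    package main

    import (
        "os/exec"
        "strings"
    )

    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        // A missing image or a mismatched ID both force a transfer.
        return err != nil || strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        const img = "gcr.io/k8s-minikube/storage-provisioner:v5"
        const id = "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
        if needsTransfer(img, id) {
            exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run() // ignore "not found"
            // The tarball was copied to the node beforehand, as in the scp line above.
            if err := exec.Command("sudo", "podman", "load", "-i",
                "/var/lib/minikube/images/storage-provisioner_v5").Run(); err != nil {
                panic(err)
            }
        }
    }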
	I0813 20:41:14.624314  184062 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0813 20:41:14.624404  184062 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:41:14.810370  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0
	I0813 20:41:14.823794  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0
	I0813 20:41:14.831505  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0
	I0813 20:41:14.837157  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0
	I0813 20:41:14.875739  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7
	I0813 20:41:16.137715  184062 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.529294083s)
	I0813 20:41:16.137761  184062 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 20:41:16.137771  184062 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4: (1.513340529s)
	I0813 20:41:16.137822  184062 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0813 20:41:16.137851  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes)
	I0813 20:41:16.137864  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0: (1.314032975s)
	I0813 20:41:16.137972  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0: (1.306436855s)
	I0813 20:41:16.138018  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0: (1.300838079s)
	I0813 20:41:16.138069  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7: (1.262310472s)
	I0813 20:41:16.138362  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0: (1.32794891s)
	I0813 20:41:16.384138  184062 crio.go:191] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:41:16.384215  184062 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:41:16.785742  184062 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000482320}
	I0813 20:41:16.785881  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:41:17.258650  184062 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc0014160a0}
	I0813 20:41:17.258765  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0813 20:41:13.663357  190586 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20210813204027-13784" ...
	I0813 20:41:13.663420  190586 cli_runner.go:115] Run: docker start kubernetes-upgrade-20210813204027-13784
	I0813 20:41:14.413173  190586 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:41:14.480738  190586 kic.go:420] container "kubernetes-upgrade-20210813204027-13784" state is running.
	I0813 20:41:14.481185  190586 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:14.533421  190586 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/config.json ...
	I0813 20:41:14.533652  190586 machine.go:88] provisioning docker machine ...
	I0813 20:41:14.533680  190586 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210813204027-13784"
	I0813 20:41:14.533741  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:14.593205  190586 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:14.593543  190586 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:14.593573  190586 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210813204027-13784 && echo "kubernetes-upgrade-20210813204027-13784" | sudo tee /etc/hostname
	I0813 20:41:14.594189  190586 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51748->127.0.0.1:32914: read: connection reset by peer
	I0813 20:41:17.757652  190586 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210813204027-13784
	
	I0813 20:41:17.757736  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:17.809908  190586 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:17.810150  190586 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:17.810194  190586 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210813204027-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210813204027-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210813204027-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:41:17.937019  190586 main.go:130] libmachine: SSH cmd err, output: <nil>: 
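"Using SSH client type: native" above means commands run through Go's SSH stack rather than shelling out to an ssh binary; the earlier "connection reset by peer" is just a dial retried until sshd inside the restarted container is up. A sketch of one such command with golang.org/x/crypto/ssh (the key path, port, and hostname below are illustrative, not the real profile's):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/.minikube/machines/demo/id_rsa") // illustrative path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32914", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput(`sudo hostname demo && echo "demo" | sudo tee /etc/hostname`)
        fmt.Printf("err=%v output=%s\n", err, out)
    }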
	I0813 20:41:17.937073  190586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:41:17.937122  190586 ubuntu.go:177] setting up certificates
	I0813 20:41:17.937137  190586 provision.go:83] configureAuth start
	I0813 20:41:17.937207  190586 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:17.976748  190586 provision.go:138] copyHostCerts
	I0813 20:41:17.976838  190586 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:41:17.976852  190586 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:41:17.976905  190586 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:41:17.977012  190586 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:41:17.977030  190586 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:41:17.977056  190586 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:41:17.977139  190586 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:41:17.977150  190586 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:41:17.977173  190586 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:41:17.977240  190586 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210813204027-13784 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20210813204027-13784]
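The "generating server cert" line above lists the subject alternative names the serving certificate must carry: the node IP, loopback, and the profile hostname, all of which clients may use to reach the API server. A compressed sketch of CA-signed serving-cert generation with crypto/x509; the CA is created inline here for brevity, whereas minikube reuses the ca.pem/ca-key.pem pair named in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (stand-in for the cached minikubeCA key pair).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert whose SANs mirror the san=[...] list in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "kubernetes-upgrade-20210813204027-13784"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20210813204027-13784"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // errors elided for brevity
    }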
	I0813 20:41:18.160322  190586 provision.go:172] copyRemoteCerts
	I0813 20:41:18.160400  190586 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:41:18.160452  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:18.202108  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:18.332947  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:41:18.351215  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:41:18.370875  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0813 20:41:18.391442  190586 provision.go:86] duration metric: configureAuth took 454.287423ms
	I0813 20:41:18.391470  190586 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:41:18.391661  190586 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204027-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:41:18.391813  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:18.434661  190586 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:18.434912  190586 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:18.434948  190586 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:41:18.932995  190586 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:41:18.933031  190586 machine.go:91] provisioned docker machine in 4.399358736s
	I0813 20:41:18.933045  190586 start.go:267] post-start starting for "kubernetes-upgrade-20210813204027-13784" (driver="docker")
	I0813 20:41:18.933054  190586 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:18.933127  190586 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:18.933181  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:18.979315  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:19.078356  190586 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:19.081408  190586 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:19.081428  190586 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:19.081436  190586 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:19.081443  190586 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:41:19.081453  190586 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:19.081525  190586 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:19.081621  190586 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:41:19.081732  190586 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:19.088686  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:19.106549  190586 start.go:270] post-start completed in 173.486585ms
	I0813 20:41:19.106625  190586 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:19.106674  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:19.153258  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:19.237881  190586 fix.go:57] fixHost completed within 5.617373706s
	I0813 20:41:19.237915  190586 start.go:80] releasing machines lock for "kubernetes-upgrade-20210813204027-13784", held for 5.617441789s
	I0813 20:41:19.238030  190586 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:19.282919  190586 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:19.282963  190586 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:19.282980  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:19.283014  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:19.331699  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:19.340913  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:19.421595  190586 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:41:19.698844  190586 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:41:19.709693  190586 docker.go:153] disabling docker service ...
	I0813 20:41:19.709752  190586 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:41:19.719985  190586 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:41:19.730683  190586 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:41:19.815964  190586 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:41:19.909438  190586 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:41:19.919982  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:41:19.932677  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:41:19.940476  190586 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:41:19.940507  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
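The two sed edits above retarget CRI-O without regenerating its config: pin pause_image to the version this Kubernetes release expects, and force cni_default_network to "kindnet" so CRI-O prefers that CNI over its other bundled configs (the CRI-O log further down notes "Default CNI network name kindnet is unchangeable"). The equivalent rewrite in Go, run here against a scratch copy rather than /etc/crio/crio.conf (assumption):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/tmp/crio.conf" // scratch copy; the real target is /etc/crio/crio.conf
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf = regexp.MustCompile(`(?m)^pause_image = .*$`).
            ReplaceAll(conf, []byte(`pause_image = "k8s.gcr.io/pause:3.4.1"`))
        // Also matches a commented-out default, like the log's sed expression.
        conf = regexp.MustCompile(`(?m)^.*cni_default_network = .*$`).
            ReplaceAll(conf, []byte(`cni_default_network = "kindnet"`))
        if err := os.WriteFile(path, conf, 0644); err != nil {
            panic(err)
        }
    }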
	I0813 20:41:19.948544  190586 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:41:19.954722  190586 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:41:19.954777  190586 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:41:19.961861  190586 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:41:19.968312  190586 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:41:20.050000  190586 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:41:20.061375  190586 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:41:20.061451  190586 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:41:20.064986  190586 start.go:413] Will wait 60s for crictl version
	I0813 20:41:20.065043  190586 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:20.093436  190586 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:41:20.093577  190586 ssh_runner.go:149] Run: crio --version
	I0813 20:41:20.162541  190586 ssh_runner.go:149] Run: crio --version
	I0813 20:41:18.414126  184062 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/metrics-scraper_v1.0.4: (2.029879176s)
	I0813 20:41:18.414162  184062 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0813 20:41:18.414202  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (1.628288577s)
	I0813 20:41:18.414249  184062 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0813 20:41:18.414290  184062 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:41:18.414335  184062 ssh_runner.go:149] Run: which crictl
	I0813 20:41:18.444723  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0: (1.185922336s)
	I0813 20:41:18.444842  184062 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:41:18.470243  184062 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0813 20:41:18.470317  184062 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:41:18.473651  184062 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0813 20:41:18.473699  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes)
	I0813 20:41:18.601464  184062 crio.go:191] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:41:18.601567  184062 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:41:20.233110  190586 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0813 20:41:20.233191  190586 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210813204027-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:20.273989  190586 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:20.277343  190586 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:41:20.287295  190586 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:41:20.287358  190586 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:20.317405  190586 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 20:41:20.317467  190586 ssh_runner.go:149] Run: which lz4
	I0813 20:41:20.320544  190586 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 20:41:20.323473  190586 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:41:20.323498  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 20:41:21.457122  190586 crio.go:362] Took 1.136612 seconds to copy over tarball
	I0813 20:41:21.457197  190586 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
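Because no preloaded images were found in the container, the ~590 MB preload tarball is copied in whole and unpacked into /var, populating CRI-O's image store in one pass instead of pulling each image. The extraction step sketched with os/exec (tar's -I flag hands decompression to the lz4 binary, which must exist on the node):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(string(out))
        }
        fmt.Printf("extracted preload in %s\n", time.Since(start)) // the log times this step too
    }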
	I0813 20:41:26.997935  184062 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/dashboard_v2.1.0: (8.396339719s)
	I0813 20:41:26.997964  184062 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
	I0813 20:41:26.997986  184062 cache_images.go:113] Successfully loaded all cached images
	I0813 20:41:26.997999  184062 cache_images.go:82] LoadImages completed in 12.999469642s
	I0813 20:41:26.998074  184062 ssh_runner.go:149] Run: crio config
	I0813 20:41:27.079538  184062 cni.go:93] Creating CNI manager for ""
	I0813 20:41:27.079562  184062 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 20:41:27.079574  184062 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:41:27.079590  184062 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-20210813203846-13784 NodeName:missing-upgrade-20210813203846-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:41:27.079778  184062 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "missing-upgrade-20210813203846-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:41:27.079925  184062 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=missing-upgrade-20210813203846-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20210813203846-13784 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:}
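The "kubeadm config:" YAML above is produced by rendering the kubeadm options struct (dumped a few lines earlier) through a Go template. A drastically simplified sketch of that render step; both the struct and the template fragment below are illustrative stand-ins, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        tmpl := template.Must(template.New("kubeadm").Parse(
            "apiVersion: kubeadm.k8s.io/v1beta2\n" +
                "kind: InitConfiguration\n" +
                "localAPIEndpoint:\n" +
                "  advertiseAddress: {{.AdvertiseAddress}}\n" +
                "  bindPort: {{.APIServerPort}}\n"))
        opts := struct {
            AdvertiseAddress string
            APIServerPort    int
        }{AdvertiseAddress: "192.168.67.2", APIServerPort: 8443}
        if err := tmpl.Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }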
	I0813 20:41:27.079983  184062 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
	I0813 20:41:27.088035  184062 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:41:27.088103  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:41:27.096475  184062 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (567 bytes)
	I0813 20:41:27.110939  184062 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:41:27.129852  184062 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0813 20:41:27.145342  184062 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:41:27.149678  184062 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:41:27.162594  184062 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784 for IP: 192.168.67.2
	I0813 20:41:27.162661  184062 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:41:27.162685  184062 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:41:27.162763  184062 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/client.key
	I0813 20:41:27.162794  184062 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key.c7fa3a9e
	I0813 20:41:27.162813  184062 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:41:27.715222  184062 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt.c7fa3a9e ...
	I0813 20:41:27.715253  184062 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt.c7fa3a9e: {Name:mkb5af55e458384b2903d9e0638bf2c64cc9d2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:27.715435  184062 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key.c7fa3a9e ...
	I0813 20:41:27.715449  184062 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key.c7fa3a9e: {Name:mkdda624701aacf40cd5f637845da44b60bfbde3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:27.715550  184062 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt
	I0813 20:41:27.715664  184062 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key
	I0813 20:41:27.715735  184062 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/proxy-client.key
	I0813 20:41:27.715855  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:41:27.715892  184062 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:41:27.715902  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:41:27.715924  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:41:27.715949  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:41:27.715973  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:41:27.716074  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:27.716980  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:41:27.735519  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:28 UTC. --
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.135173785Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.136754653Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139480430Z" level=info msg="Conmon does support the --sync option"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139554604Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139563583Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.144516509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.147006953Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.149207086Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160317327Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160348934Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558550091Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-ts9sl Namespace:kube-system ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 NetNS:/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558791773Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 13 20:40:53 pause-20210813203929-13784 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.306861254Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.450483242Z" level=info msg="Ran pod sandbox c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 with infra container: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.451659183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452327272Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452980775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.453580166Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.454314289Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466027097Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/passwd: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466066274Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/group: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.662676174Z" level=info msg="Created container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.663296728Z" level=info msg="Starting container: 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.673744233Z" level=info msg="Started container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
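The CRI-O startup entries above enumerate four CNI configs under /etc/cni/net.d/ and pin kindnet (type=ptp) as the unchangeable default network. A minimal way to re-check that set on the node, as a sketch assuming the pause-20210813203929-13784 container is still running; the commented listing simply restates the four files named in the log, it is not fresh output:

	docker exec pause-20210813203929-13784 ls /etc/cni/net.d/
	# 10-kindnet.conflist        (type=ptp, the pinned default network "kindnet")
	# 100-crio-bridge.conf       (type=bridge)
	# 200-loopback.conf         (type=loopback)
	# 87-podman-bridge.conflist  (type=bridge)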
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	8422317486aff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   31 seconds ago       Exited              storage-provisioner       0                   c9be4b40ae287
	f5e960ccbf41e       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   39 seconds ago       Running             coredns                   0                   32623516945f8
	0e66c2b5613f5       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   About a minute ago   Running             kindnet-cni               0                   3a829ab2057cc
	15fb32d86d158       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   125d82aa8b508
	765e30beb45ae       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   eae2bc9a9df7c
	ecf109c279e47       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   3c556f4397e88
	de897ce9eab3c       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   4483c604f9ed1
	0a93b9e0c15af       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   436db4ab23452
	
	* 
	* ==> coredns [f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.138700] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.390684] cgroup: cgroup2: unknown option "nsdelegate"
	[  +2.362662] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[  +6.305228] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.592043] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.311286] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a d3 6c 1f a1 fb 08 06        ........l.....
	[Aug13 20:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth638a0651
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 5b 20 34 63 04 08 06        .......[ 4c...
	[ +15.906177] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.377154] cgroup: cgroup2: unknown option "nsdelegate"
	[  +4.439794] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.000008] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.270961] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 1d 9a 5f 02 4e 08 06        ......j.._.N..
	[ +14.030429] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9a8d7a44
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d6 75 db 96 9a 5d 08 06        .......u...]..
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.579695] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803] <==
	* 2021-08-13 20:39:56.508638 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-13 20:40:14.023652 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" " with result "range_response_count:1 size:4260" took too long (353.715245ms) to execute
	2021-08-13 20:40:14.023710 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (385.087564ms) to execute
	2021-08-13 20:40:14.023730 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:0 size:5" took too long (742.628324ms) to execute
	2021-08-13 20:40:14.023825 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" " with result "range_response_count:1 size:210" took too long (742.874769ms) to execute
	2021-08-13 20:40:15.423385 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 20:40:15.593861 W | wal: sync duration of 1.564336648s, expected less than 1s
	2021-08-13 20:40:15.594635 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.454395348s) to execute
	2021-08-13 20:40:16.242009 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:1 size:210" took too long (641.602297ms) to execute
	2021-08-13 20:40:16.242033 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:0 size:5" took too long (640.465675ms) to execute
	2021-08-13 20:40:16.242101 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" " with result "range_response_count:1 size:4260" took too long (641.294407ms) to execute
	2021-08-13 20:40:17.051976 W | etcdserver: request "header:<ID:8128006947642344446 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" mod_revision:289 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" value_size:3977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" > >>" with result "size:16" took too long (432.247936ms) to execute
	2021-08-13 20:40:18.660824 W | wal: sync duration of 1.725448216s, expected less than 1s
	2021-08-13 20:40:18.929173 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.00007561s) to execute
	WARNING: 2021/08/13 20:40:18 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-13 20:40:19.784013 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (3.144407226s) to execute
	2021-08-13 20:40:19.784107 W | etcdserver: request "header:<ID:8128006947642344449 > lease_revoke:<id:70cc7b413e210346>" with result "size:29" took too long (1.12306053s) to execute
	2021-08-13 20:40:19.784416 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (2.727675957s) to execute
	2021-08-13 20:40:19.784657 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (2.728835843s) to execute
	2021-08-13 20:40:19.790810 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813203929-13784\" " with result "range_response_count:1 size:3976" took too long (848.789811ms) to execute
	2021-08-13 20:40:24.423766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:25.164612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:35.163828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:45.164164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:55.164376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
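The two wal sync warnings above (1.56s and 1.72s against the expected sub-1s bound) and the range request that hit "context deadline exceeded" point to disk pressure in the 20:40:14 to 20:40:19 window, after which the /health probes return to OK. One hedged way to poke the same health handler by hand; kubeadm-style control planes usually expose it over plain HTTP on 127.0.0.1:2381, but that port is an assumption here, not something this log states:

	out/minikube-linux-amd64 -p pause-20210813203929-13784 ssh "curl -s http://127.0.0.1:2381/health"
	# expected shape: {"health":"true"}  (exact fields vary by etcd version)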
	
	* 
	* ==> kernel <==
	*  20:41:38 up  1:24,  0 users,  load average: 7.04, 3.83, 2.03
	Linux pause-20210813203929-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f] <==
	* I0813 20:40:19.785459       1 trace.go:205] Trace[528132474]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:18.938) (total time: 847ms):
	Trace[528132474]: ---"Object stored in database" 846ms (20:40:00.785)
	Trace[528132474]: [847.010131ms] [847.010131ms] END
	I0813 20:40:19.785471       1 trace.go:205] Trace[572814057]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813203929-13784,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:19.248) (total time: 537ms):
	Trace[572814057]: ---"Object stored in database" 536ms (20:40:00.785)
	Trace[572814057]: [537.086569ms] [537.086569ms] END
	I0813 20:40:19.785540       1 trace.go:205] Trace[1356449652]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/replicaset-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/tokens-controller,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:40:17.055) (total time: 2730ms):
	Trace[1356449652]: ---"About to write a response" 2729ms (20:40:00.785)
	Trace[1356449652]: [2.730061569s] [2.730061569s] END
	I0813 20:40:19.786359       1 trace.go:205] Trace[769020975]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:16.639) (total time: 3147ms):
	Trace[769020975]: [3.147238596s] [3.147238596s] END
	I0813 20:40:19.787020       1 trace.go:205] Trace[847010192]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/node-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/kube-controller-manager,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:40:17.056) (total time: 2730ms):
	Trace[847010192]: ---"About to write a response" 2730ms (20:40:00.786)
	Trace[847010192]: [2.730588631s] [2.730588631s] END
	I0813 20:40:19.787603       1 trace.go:205] Trace[2107042432]: "Patch" url:/api/v1/nodes/pause-20210813203929-13784/status,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:19.069) (total time: 718ms):
	Trace[2107042432]: ---"Object stored in database" 714ms (20:40:00.785)
	Trace[2107042432]: [718.430347ms] [718.430347ms] END
	I0813 20:40:19.793912       1 trace.go:205] Trace[1263737565]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-pause-20210813203929-13784,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:40:18.941) (total time: 852ms):
	Trace[1263737565]: ---"About to write a response" 852ms (20:40:00.793)
	Trace[1263737565]: [852.227308ms] [852.227308ms] END
	I0813 20:40:23.498807       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:40:23.848939       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:40:39.165407       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:40:39.165451       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:40:39.165458       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637] <==
	* I0813 20:40:22.945115       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0813 20:40:22.946089       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0813 20:40:22.946088       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:40:22.946247       1 shared_informer.go:247] Caches are synced for cronjob 
	I0813 20:40:22.946455       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:40:22.952128       1 shared_informer.go:247] Caches are synced for namespace 
	I0813 20:40:23.005106       1 shared_informer.go:247] Caches are synced for endpoint 
	I0813 20:40:23.145003       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:40:23.145579       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:23.150452       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 20:40:23.168777       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:40:23.168800       1 disruption.go:371] Sending events to api server.
	I0813 20:40:23.181021       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:40:23.507939       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pjb6w"
	I0813 20:40:23.517415       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k8wlb"
	E0813 20:40:23.548254       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"1eae9bdb-1aea-4c39-a2f9-a9df683878b4", ResourceVersion:"265", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484003, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00175a8d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00175a8e8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019c5800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a918), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a930), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5820)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5860)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00048fc20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001baacb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000c8a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000ae2ff0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001baad00)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:40:23.661112       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.661142       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:23.667920       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.851329       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:23.871778       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:23.965355       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ncl4r"
	I0813 20:40:23.980674       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ts9sl"
	I0813 20:40:24.022166       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-ncl4r"
	
	* 
	* ==> kube-proxy [15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e] <==
	* I0813 20:40:24.404639       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:40:24.404699       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:40:24.404733       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:24.484285       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:24.484325       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:24.484338       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:24.484352       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:24.484722       1 server.go:643] Version: v1.21.3
	I0813 20:40:24.485344       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:24.485368       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:40:24.485394       1 config.go:315] Starting service config controller
	I0813 20:40:24.485398       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 20:40:24.494950       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:24.496346       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:24.585594       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:24.585676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662] <==
	* E0813 20:40:00.668147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:00.668368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.668713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:00.669360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:00.669416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669423       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:00.669448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.680678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:00.680761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:00.680779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:01.515088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:01.520824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:01.529811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:01.530681       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:01.534900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:01.603579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:01.619709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:02.065154       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0813 20:41:17.023801       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	I0813 20:41:28.832649       1 trace.go:205] Trace[1216246411]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (13-Aug-2021 20:41:18.831) (total time: 10001ms):
	Trace[1216246411]: [10.001010096s] [10.001010096s] END
	E0813 20:41:28.832675       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=33": net/http: TLS handshake timeout
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:38 UTC. --
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:26.118723    1594 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/f8415df1-329a-4761-8b93-08dab691c8a1/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.118873    1594 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8415df1-329a-4761-8b93-08dab691c8a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f8415df1-329a-4761-8b93-08dab691c8a1" (UID: "f8415df1-329a-4761-8b93-08dab691c8a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.141958    1594 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8415df1-329a-4761-8b93-08dab691c8a1-kube-api-access-c659l" (OuterVolumeSpecName: "kube-api-access-c659l") pod "f8415df1-329a-4761-8b93-08dab691c8a1" (UID: "f8415df1-329a-4761-8b93-08dab691c8a1"). InnerVolumeSpecName "kube-api-access-c659l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.219158    1594 reconciler.go:319] "Volume detached for volume \"kube-api-access-c659l\" (UniqueName: \"kubernetes.io/projected/f8415df1-329a-4761-8b93-08dab691c8a1-kube-api-access-c659l\") on node \"pause-20210813203929-13784\" DevicePath \"\""
	Aug 13 20:40:26 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:26.219197    1594 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8415df1-329a-4761-8b93-08dab691c8a1-config-volume\") on node \"pause-20210813203929-13784\" DevicePath \"\""
	Aug 13 20:40:29 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:29.197345    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940415    1594 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940484    1594 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ts9sl"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940509    1594 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ts9sl"
	Aug 13 20:40:34 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:34.940578    1594 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-ts9sl_kube-system(da06b52c-7664-4a7e-98ae-ea1e61dc5560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-ts9sl_kube-system(da06b52c-7664-4a7e-98ae-ea1e61dc5560)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0(4aa551d78d59ab30ff1013d802f90915d1e769f16fbc57199576d558b98b7da3): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-ts9sl" podUID=da06b52c-7664-4a7e-98ae-ea1e61dc5560
	Aug 13 20:40:35 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:35.167096    1594 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ncl4r_kube-system_f8415df1-329a-4761-8b93-08dab691c8a1_0(c1acc36c8a211e5c0e2add67b80fda71e4e6a48ceab697fd761bcf3b536fd5f4): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 20:40:35 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:35.167217    1594 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-ncl4r_kube-system_f8415df1-329a-4761-8b93-08dab691c8a1_0(c1acc36c8a211e5c0e2add67b80fda71e4e6a48ceab697fd761bcf3b536fd5f4): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-ncl4r"
	Aug 13 20:40:39 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:39.269570    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:49 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:49.324415    1594 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/docker/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:53.056122    1594 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: W0813 20:40:53.056137    1594 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/var/run/crio/crio.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory". Reconnecting...
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057341    1594 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057421    1594 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:40:53 pause-20210813203929-13784 kubelet[1594]: E0813 20:40:53.057445    1594 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.005445    1594 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.190781    1594 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5bba0aa8-5d05-4858-b5af-a2456279867c-tmp\") pod \"storage-provisioner\" (UID: \"5bba0aa8-5d05-4858-b5af-a2456279867c\") "
	Aug 13 20:40:56 pause-20210813203929-13784 kubelet[1594]: I0813 20:40:56.190885    1594 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qszs7\" (UniqueName: \"kubernetes.io/projected/5bba0aa8-5d05-4858-b5af-a2456279867c-kube-api-access-qszs7\") pod \"storage-provisioner\" (UID: \"5bba0aa8-5d05-4858-b5af-a2456279867c\") "
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:40:57 pause-20210813203929-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
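Two patterns in the kubelet excerpt above are worth flagging. First, coredns sandbox creation fails repeatedly with 'could not add IP address to "cni0": permission denied'; the dmesg section records martian sources from both 10.85.0.x and 10.244.0.x, so more than one CNI config appears to be touching the bridge, though this log alone does not prove the cause. Second, the crio.sock dial errors at 20:40:53 line up with the CRI-O restart that systemd records near the top of this log. A spot-check sketch using the same ssh entry point the harness drives (commands are illustrative, not captured output):

	out/minikube-linux-amd64 -p pause-20210813203929-13784 ssh "ip addr show cni0"
	out/minikube-linux-amd64 -p pause-20210813203929-13784 ssh "sudo crictl ps -a --name coredns"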
	
	* 
	* ==> storage-provisioner [8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b] <==
	* 
	goroutine 111 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc0004b4b90, 0x3)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0004b4b80)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00013a4e0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000440c80, 0x18e5530, 0xc000046100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005e7200)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005e7200, 0x18b3d60, 0xc000272000, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005e7200, 0x3b9aca00, 0x0, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0005e7200, 0x3b9aca00, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:41:38.492467  195378 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/VerifyStatus (11.84s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (30.32s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813203929-13784 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210813203929-13784 --alsologtostderr -v=5: exit status 80 (7.542637213s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210813203929-13784 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:41:39.831541  197864 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:41:39.831758  197864 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:39.831793  197864 out.go:311] Setting ErrFile to fd 2...
	I0813 20:41:39.831805  197864 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:39.831983  197864 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:41:39.832224  197864 out.go:305] Setting JSON to false
	I0813 20:41:39.832249  197864 mustload.go:65] Loading cluster: pause-20210813203929-13784
	I0813 20:41:39.832641  197864 config.go:177] Loaded profile config "pause-20210813203929-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:41:39.833222  197864 cli_runner.go:115] Run: docker container inspect pause-20210813203929-13784 --format={{.State.Status}}
	I0813 20:41:39.899882  197864 host.go:66] Checking if "pause-20210813203929-13784" exists ...
	I0813 20:41:39.900775  197864 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210813203929-13784 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:41:39.903454  197864 out.go:177] * Pausing node pause-20210813203929-13784 ... 
	I0813 20:41:39.903486  197864 host.go:66] Checking if "pause-20210813203929-13784" exists ...
	I0813 20:41:39.903808  197864 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:39.903854  197864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210813203929-13784
	I0813 20:41:39.954250  197864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32890 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813203929-13784/id_rsa Username:docker}
	I0813 20:41:40.068082  197864 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:40.082441  197864 pause.go:50] kubelet running: true
	I0813 20:41:40.082520  197864 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:41:45.146547  197864 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (5.064002735s)
	I0813 20:41:45.146616  197864 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:41:45.146679  197864 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:41:45.230796  197864 cri.go:76] found id: "8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b"
	I0813 20:41:45.230825  197864 cri.go:76] found id: "f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5"
	I0813 20:41:45.230833  197864 cri.go:76] found id: "0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d"
	I0813 20:41:45.230839  197864 cri.go:76] found id: "15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e"
	I0813 20:41:45.230845  197864 cri.go:76] found id: "765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803"
	I0813 20:41:45.230851  197864 cri.go:76] found id: "ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662"
	I0813 20:41:45.230856  197864 cri.go:76] found id: "de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637"
	I0813 20:41:45.230862  197864 cri.go:76] found id: "0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f"
	I0813 20:41:45.230867  197864 cri.go:76] found id: ""
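The eight IDs above are the combined output of the compound crictl invocation logged at cri.go: one `crictl ps -a --quiet --label io.kubernetes.pod.namespace=<ns>` per target namespace, chained with ";" so a single SSH round trip covers all four namespaces (the empty final id is likely just the trailing newline of that output). A sketch of how such a command line can be assembled; the function name is illustrative, not minikube's:

    package main

    import (
        "fmt"
        "strings"
    )

    // crictlListCmd builds one shell line listing container IDs for every
    // namespace of interest, mirroring the logged command.
    func crictlListCmd(namespaces []string) string {
        parts := make([]string, 0, len(namespaces))
        for _, ns := range namespaces {
            parts = append(parts,
                "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
        }
        return strings.Join(parts, "; ")
    }

    func main() {
        fmt.Println(crictlListCmd([]string{
            "kube-system", "kubernetes-dashboard",
            "storage-gluster", "istio-operator",
        }))
    }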
	I0813 20:41:45.230917  197864 ssh_runner.go:149] Run: sudo runc list -f json
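The JSON block that follows is runc's container state list: one object per container carrying its OCI bundle, rootfs, status and the full CRI-O/Kubernetes annotation set, which is what ties a raw container ID back to a pod and namespace. A minimal decoding sketch of the fields used for that mapping (the struct is illustrative, not minikube's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // runcState mirrors the per-container objects in the dump below.
    type runcState struct {
        ID          string            `json:"id"`
        PID         int               `json:"pid"`
        Status      string            `json:"status"` // "running", "stopped", ...
        Bundle      string            `json:"bundle"`
        Annotations map[string]string `json:"annotations"`
    }

    func main() {
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
        if err != nil {
            panic(err)
        }
        var states []runcState
        if err := json.Unmarshal(out, &states); err != nil {
            panic(err)
        }
        for _, s := range states {
            fmt.Printf("%s %s pod=%s ns=%s\n", s.ID, s.Status,
                s.Annotations["io.kubernetes.pod.name"],
                s.Annotations["io.kubernetes.pod.namespace"])
        }
    }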
	I0813 20:41:45.277097  197864 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","pid":1319,"status":"running","bundle":"/run/containers/storage/overlay-containers/0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f/userdata","rootfs":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","created":"2021-08-13T20:39:55.733812047Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.502928921Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"k
ube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1f9c2357385946ac7fe204980efa453476a4deca8c4ac4cbe35940a89a7719a9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/containers/kube-apiserver/244c51
68\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e39f0a13297cca692cd1ef2164ddbdf6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kuberne
tes.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","pid":2173,"status":"running","bundle":"/run/containers/storage/overlay-containers/0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d/userdata","rootfs":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","created":"2021-08-13T20:40:24.257677146Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b0cd6686","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.
cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b0cd6686\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.097525291Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc33
9c7f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3e5826d24f6d1de5c71b419404034707941fd50f64525a5e6ed1d762070b7fc9/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\
"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/containers/kindnet-cni/5f00273b\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/199ebbdb-e768-4153-98da-db0adc339c7f/volumes/kubernetes.io~projected/kube-api-access-wjm59\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","kubernetes.io/config.seen":"2021-
08-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","pid":2103,"status":"running","bundle":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata","rootfs":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","created":"2021-08-13T20:40:23.941830088Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.522016337Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.ku
bernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.848566608Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-pjb6w","io.kubernetes.cri-o.Labels":"{\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.container.name\":\"POD\",\"controller-revision-hash\":\"7cdcb64568\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-db
e323a2b35f/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-pjb6w\",\"uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f2d0ed6cc47483f2712d55b35b991ce791b85910750ddfd4950832b94fd41ee8/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SeccompProfilePath":"runtime/de
fault","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/shm","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","pid":2167,"status":"running","bundle":"/run/containers/storage/overlay-containers/15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e/userdata","rootfs":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","created":"2021-08-13T20:40:24.19384482Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.
hash":"6ea07f15","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6ea07f15\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:24.079504068Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
"io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-pjb6w\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5b9ca7fa-6b03-4939-a057-dbe323a2b35f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pjb6w_5b9ca7fa-6b03-4939-a057-dbe323a2b35f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e9b7502426319c404e68879eb7e06bb28239362242151b8feaccddfd752f62e4/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-pjb6w_kube-system_5b9ca7fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-pjb6w_kube-system_5b9ca7
fa-6b03-4939-a057-dbe323a2b35f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/containers/kube-proxy/93f08aa8\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03-4939-a057-dbe323a2b35f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5b9ca7fa-6b03
-4939-a057-dbe323a2b35f/volumes/kubernetes.io~projected/kube-api-access-8mjqv\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-pjb6w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5b9ca7fa-6b03-4939-a057-dbe323a2b35f","kubernetes.io/config.seen":"2021-08-13T20:40:23.522016337Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","pid":2690,"status":"running","bundle":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata","rootfs":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe46b1f0d00ec0757ccb1ee5d1b565323/merged","created":"2021-08-13T20:40:49.101759063Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":
"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.983557664Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth9a8d7a44\",\"mac\":\"2a:cb:06:90:ab:63\"},{\"name\":\"eth0\",\"mac\":\"d6:75:db:96:9a:5d\",\"sandbox\":\"/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:48.952217174Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.HostNetwork":"false",
"io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-ts9sl","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-ts9sl\",\"uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4c34eb129ea2f96013066202813b8f5fe4
6b1f0d00ec0757ccb1ee5d1b565323/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","k8s-app":"kube
-dns","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","pid":2100,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata","rootfs":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","created":"2021-08-13T20:40:23.95809172Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:23.523876061Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3a829ab2057cc070db7b625eea9bac1158d09da50
66242b708533877fd257658","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:23.852094836Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-k8wlb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kindnet\",\"controller-revision-hash\":\"694b6fb659\",\"app\":\"kindnet\",\"tier\":\"node\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"io.kubernetes.pod.name\":\"kindnet-k8wlb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pod
s/kube-system_kindnet-k8wlb_199ebbdb-e768-4153-98da-db0adc339c7f/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-k8wlb\",\"uid\":\"199ebbdb-e768-4153-98da-db0adc339c7f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e3dc4d3fbf9c534f5e09127da862bcb04fc813860eaa2a906f4fa43d95bac014/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-k8wlb_kube-system_199ebbdb-e768-4153-98da-db0adc339c7f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658","io.
kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658/userdata/shm","io.kubernetes.pod.name":"kindnet-k8wlb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"199ebbdb-e768-4153-98da-db0adc339c7f","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-13T20:40:23.523876061Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","pid":1206,"status":"running","bundle":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata","rootfs":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","created":"2021-08-13T20:39:55.433788257Z","annotations":
{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"4ebf0a68eff661e9c135374acf699695\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967492810Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.272991516Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cr
i-o.KubeName":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210813203929-13784\",\"uid\":\"4ebf0a68eff661e9c135374acf699695\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9655e51a03de7827aeecf5127c5a68d29ad378872b264a13d5283bf831ede3b6/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.c
ri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source"
:"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","pid":1180,"status":"running","bundle":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata","rootfs":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","created":"2021-08-13T20:39:55.397924075Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967490591Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"436db4ab234524
af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.265342067Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210813203929-13784\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o
.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210813203929-13784_e39f0a13297cca692cd1ef2164ddbdf6/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210813203929-13784\",\"uid\":\"e39f0a13297cca692cd1ef2164ddbdf6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c2c27b3c4559944b868018c6c77b1f4aad13bb01c2299156438f62b4bbd13fce/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210813203929-13784_kube-system_e39f0a13297cca692cd1ef2164ddbdf6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kube
rnetes.cri-o.SandboxID":"436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e39f0a13297cca692cd1ef2164ddbdf6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"e39f0a13297cca692cd1ef2164ddbdf6","kubernetes.io/config.seen":"2021-08-13T20:39:49.967490591Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","pid":1187,"status":"running","bundle":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979d
fb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata","rootfs":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","created":"2021-08-13T20:39:55.39391506Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967491922Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.269717758Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNe
twork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"component\":\"kube-controller-manager\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210813203929-13784\",\"uid\":\"13241a9162471f4b325d1046e0460e76\",\"namespace\":\"kube-sy
stem\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7d8993037a204d383d07f8aff9fd5c20ee6097966b97298da1ff0aaca4a36bfb/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8/userdata/shm","io.kubernetes.pod.name":"
kube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","pid":1346,"status":"running","bundle":"/run/containers/storage/overlay-containers/765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803/userdata","rootfs":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","created":"2021-08-13T20:39:55.793791351Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"58d4e8b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container
.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"58d4e8b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.570582977Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\
",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/609a9d6e41fbc2fd99914518606aa9f9c00bb7282f2dd7b02cc7e10d2d944ee4/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false",
"io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4a6c9153825faff90e9c8767408e0ebc/containers/etcd/d93341cb\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-0
8-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b/userdata","rootfs":"/var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged","created":"2021-08-13T20:40:56.645788504Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f2d196de","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f2d196de\",\"io.kubernetes.co
ntainer.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:56.465829141Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-
provisioner_5bba0aa8-5d05-4858-b5af-a2456279867c/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\
"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/containers/storage-provisioner/2f2769ad\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5bba0aa8-5d05-4858-b5af-a2456279867c/volumes/kubernetes.io~projected/kube-api-access-qszs7\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5bba0aa8-5d05-4858-b5af-a2456279867c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provi
sioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:40:56.004913297Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","pid":3583,"status":"running","bundle":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata","rootfs":"/var/lib/containers/storage/overlay/581bfa6fb2fdccbb85ffc8650b99be95833
95c98d8baee0eb3569e7ac720cb36/merged","created":"2021-08-13T20:40:56.413738967Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T20:40:56.004913297Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:40:56.320190008Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"ad
donmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.pod.uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_5bba0aa8-5d05-4858-b5af-a2456279867c/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"5bba0aa8-5d05-4858-b5af-a2456279867c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/581bfa6fb2fdccbb85ffc8650b99be9583395c98d8baee0eb3569e7ac720cb36/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_5bba0aa8-5d05-4858-b5af-a2456279867c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings
":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"5bba0aa8-5d05-4858-b5af-a2456279867c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"sp
ec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T20:40:56.004913297Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","pid":1326,"status":"running","bundle":"/run/containers/storage/overlay-containers/de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637/userdata","rootfs":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","created":"2021-08-13T20:39:55.773726316Z","annotations":{"io.c
ontainer.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.507405986Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.
ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"13241a9162471f4b325d1046e0460e76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813203929-13784_13241a9162471f4b325d1046e0460e76/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3eec49cc4a67983c13126fc2672f97233337601e44cae67ecf733f44e61aecba/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4483c604f9ed1cdf2c5979dfb0443e21e7cd12254
37eb5b2ce6f1bc1f861baa8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210813203929-13784_kube-system_13241a9162471f4b325d1046e0460e76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/containers/kube-controller-manager/2eb45d77\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/13241a9162471f4b325d1046e0460e76/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/con
troller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.hash":"13241a9162471f4b325d1046e0460e76","kubernetes.io/config.seen":"2021-08-13T20:39:49.967491922Z","kubernetes.io/config.source":"file","org.systemd.pr
operty.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","pid":1190,"status":"running","bundle":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata","rootfs":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","created":"2021-08-13T20:39:55.407125139Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"4a6c9153825faff90e9c8767408e0ebc\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:39:49.967469748Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"eae2bc9a9df7cb31a686
4918acc7048e123e1e9569f4ad8b40e10399e0d3e89f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.263344855Z","io.kubernetes.cri-o.HostName":"pause-20210813203929-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210813203929-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210813203929-13784\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-paus
e-20210813203929-13784_4a6c9153825faff90e9c8767408e0ebc/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210813203929-13784\",\"uid\":\"4a6c9153825faff90e9c8767408e0ebc\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7cba59456a4edf1273dfb51cca49a88acb7d344720b2dad516f506ec5f5dac7b/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210813203929-13784_kube-system_4a6c9153825faff90e9c8767408e0ebc_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e
10399e0d3e89f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210813203929-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"4a6c9153825faff90e9c8767408e0ebc","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4a6c9153825faff90e9c8767408e0ebc","kubernetes.io/config.seen":"2021-08-13T20:39:49.967469748Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","pid":1337,"status":"running","bundle":"/run/containers/storage/overlay-containers/ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662/userdata","rootfs":"/var/lib/containers/storage/ove
rlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","created":"2021-08-13T20:39:55.793789894Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:39:55.519029007Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1
a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210813203929-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4ebf0a68eff661e9c135374acf699695\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210813203929-13784_4ebf0a68eff661e9c135374acf699695/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/79b08740d16c25a78016308795e25438f24615e65e0ea606b7378f63f7de14c5/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/ov
erlay-containers/3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210813203929-13784_kube-system_4ebf0a68eff661e9c135374acf699695_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4ebf0a68eff661e9c135374acf699695/containers/kube-scheduler/d30fe10b\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210813203929-13784","
io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.hash":"4ebf0a68eff661e9c135374acf699695","kubernetes.io/config.seen":"2021-08-13T20:39:49.967492810Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","pid":2722,"status":"running","bundle":"/run/containers/storage/overlay-containers/f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5/userdata","rootfs":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece564d0c97a04c0b83841935e/merged","created":"2021-08-13T20:40:49.313790208Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"287a3d56","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[
{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"287a3d56\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f5e960c
cbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:40:49.16097913Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-ts9sl\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"da06b52c-7664-4a7e-98ae-ea1e61dc5560\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-ts9sl_da06b52c-7664-4a7e-98ae-ea1e61dc5560/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/24f9e7b80dfc59aad0e59c2407e0ba11022c6fece564d
0c97a04c0b83841935e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-ts9sl_kube-system_da06b52c-7664-4a7e-98ae-ea1e61dc5560_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/etc-hosts\",\"readonly\":
false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/containers/coredns/88d76d02\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/da06b52c-7664-4a7e-98ae-ea1e61dc5560/volumes/kubernetes.io~projected/kube-api-access-mdnqp\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-ts9sl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"da06b52c-7664-4a7e-98ae-ea1e61dc5560","kubernetes.io/config.seen":"2021-08-13T20:40:23.983557664Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0813 20:41:45.278040  197864 cri.go:113] list returned 16 containers
	I0813 20:41:45.278060  197864 cri.go:116] container: {ID:0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f Status:running}
	I0813 20:41:45.278086  197864 cri.go:116] container: {ID:0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d Status:running}
	I0813 20:41:45.278093  197864 cri.go:116] container: {ID:125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 Status:running}
	I0813 20:41:45.278101  197864 cri.go:118] skipping 125d82aa8b508a740b5564623b92cd79b6803871c4da8be25563b0d393317a89 - not in ps
	I0813 20:41:45.278107  197864 cri.go:116] container: {ID:15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e Status:running}
	I0813 20:41:45.278114  197864 cri.go:116] container: {ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 Status:running}
	I0813 20:41:45.278123  197864 cri.go:118] skipping 32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 - not in ps
	I0813 20:41:45.278129  197864 cri.go:116] container: {ID:3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 Status:running}
	I0813 20:41:45.278137  197864 cri.go:118] skipping 3a829ab2057cc070db7b625eea9bac1158d09da5066242b708533877fd257658 - not in ps
	I0813 20:41:45.278142  197864 cri.go:116] container: {ID:3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b Status:running}
	I0813 20:41:45.278149  197864 cri.go:118] skipping 3c556f4397e887542ea1664fe30cd79f433d9bafc999fd1d3cfbfbf367bffa9b - not in ps
	I0813 20:41:45.278159  197864 cri.go:116] container: {ID:436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff Status:running}
	I0813 20:41:45.278166  197864 cri.go:118] skipping 436db4ab234524af774341bf87a7cafcf73b154dfe9a056ff8567e551e9f85ff - not in ps
	I0813 20:41:45.278174  197864 cri.go:116] container: {ID:4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 Status:running}
	I0813 20:41:45.278181  197864 cri.go:118] skipping 4483c604f9ed1cdf2c5979dfb0443e21e7cd1225437eb5b2ce6f1bc1f861baa8 - not in ps
	I0813 20:41:45.278187  197864 cri.go:116] container: {ID:765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803 Status:running}
	I0813 20:41:45.278193  197864 cri.go:116] container: {ID:8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b Status:stopped}
	I0813 20:41:45.278201  197864 cri.go:122] skipping {8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b stopped}: state = "stopped", want "running"
	I0813 20:41:45.278216  197864 cri.go:116] container: {ID:c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 Status:running}
	I0813 20:41:45.278224  197864 cri.go:118] skipping c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 - not in ps
	I0813 20:41:45.278232  197864 cri.go:116] container: {ID:de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637 Status:running}
	I0813 20:41:45.278240  197864 cri.go:116] container: {ID:eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f Status:running}
	I0813 20:41:45.278249  197864 cri.go:118] skipping eae2bc9a9df7cb31a6864918acc7048e123e1e9569f4ad8b40e10399e0d3e89f - not in ps
	I0813 20:41:45.278255  197864 cri.go:116] container: {ID:ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662 Status:running}
	I0813 20:41:45.278263  197864 cri.go:116] container: {ID:f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5 Status:running}
	I0813 20:41:45.278310  197864 ssh_runner.go:149] Run: sudo runc pause 0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f
	I0813 20:41:45.295536  197864 ssh_runner.go:149] Run: sudo runc pause 0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f 0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d
	I0813 20:41:47.127097  197864 out.go:177] 
	W0813 20:41:47.127260  197864 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f 0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:41:45Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:41:47.127276  197864 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:41:47.130011  197864 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:41:47.264529  197864 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210813203929-13784 --alsologtostderr -v=5" : exit status 80
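The stderr above pins down the failure: "runc pause" accepts exactly one container ID per invocation, but minikube batched two IDs into a single "sudo runc pause" call (see the ssh_runner line at 20:41:45.295536). A minimal sketch of the per-container loop the runc usage text implies -- a hypothetical helper, not minikube's actual fix; the IDs are the two from the failing command:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pauseAll pauses each container with its own runc invocation, since
	// runc rejects batched IDs with `"pause" requires exactly 1 argument(s)`.
	func pauseAll(ids []string) error {
		for _, id := range ids {
			if out, err := exec.Command("sudo", "runc", "pause", id).CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		ids := []string{
			"0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f",
			"0e66c2b5613f5c4253b269fadb962f9166323770b0d48f33ded79b6dd0247f6d",
		}
		if err := pauseAll(ids); err != nil {
			fmt.Println(err)
		}
	}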
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-13784
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860",
	        "Created": "2021-08-13T20:39:31.372712772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:31.872578968Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hosts",
	        "LogPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860-json.log",
	        "Name": "/pause-20210813203929-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/merged",
	                "UpperDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/diff",
	                "WorkDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-13784",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a821792d507c6dabf086e5652e018123e85e4b030464132aafdef8bc15a9d200",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a821792d507c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ce53ded591b3"
	                    ],
	                    "NetworkID": "a8af35fe90fb5b850638bd77da889b067a8390ebee6680d76e896390e70a0e9e",
	                    "EndpointID": "0b310d5a393fb3e0184bcf23f10e5a3746cbeb23b4b202e9e5c6f681f15cdcfa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
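The full docker inspect dump above is what the post-mortem helper collects, but when a check only needs one field, docker inspect's --format flag takes a Go template and avoids parsing the whole document. A small sketch (same container name as above; .State.Status follows the State block in the dump):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Pull a single field from the inspect object instead of the
		// full JSON document; per the dump above this prints "running".
		out, err := exec.Command("docker", "inspect",
			"--format", "{{.State.Status}}", "pause-20210813203929-13784").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}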
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784: exit status 2 (379.755599ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
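The harness treats the non-zero exit as informational ("may be ok") because minikube status signals cluster state through its exit code while still printing the requested field; stdout here reads "Running" even though the command exited 2. A sketch of reading the output despite the exit error -- binary path and profile name taken from the command above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "pause-20210813203929-13784")
		// Output still returns the captured stdout when the process exits
		// non-zero; err is then an *exec.ExitError carrying the exit code.
		out, err := cmd.Output()
		fmt.Printf("host=%q err=%v\n", strings.TrimSpace(string(out)), err)
	}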
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25: exit status 110 (10.889031622s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                        | test-preload-20210813203431-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:05 UTC | Fri, 13 Aug 2021 20:37:10 UTC |
	|         | test-preload-20210813203431-13784         |                                           |         |         |                               |                               |
	| start   | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:10 UTC | Fri, 13 Aug 2021 20:37:47 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --memory=2048 --driver=docker             |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:48 UTC | Fri, 13 Aug 2021 20:37:48 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --cancel-scheduled                        |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:00 UTC | Fri, 13 Aug 2021 20:38:26 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --schedule 5s                             |                                           |         |         |                               |                               |
	| delete  | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:28 UTC | Fri, 13 Aug 2021 20:38:33 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	| delete  | -p                                        | insufficient-storage-20210813203833-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:40 UTC | Fri, 13 Aug 2021 20:38:46 UTC |
	|         | insufficient-storage-20210813203833-13784 |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	|         | --memory=2048 --force-systemd             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:49 UTC | Fri, 13 Aug 2021 20:39:31 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=5 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:31 UTC | Fri, 13 Aug 2021 20:39:35 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	| start   | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:40:06 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                    |                                           |         |         |                               |                               |
	|         | --memory=2048 --wait=true                 |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:06 UTC | Fri, 13 Aug 2021 20:40:09 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	| delete  | -p                                        | kubenet-20210813204009-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:09 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | kubenet-20210813204009-13784              |                                           |         |         |                               |                               |
	| delete  | -p                                        | flannel-20210813204010-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:10 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | flannel-20210813204010-13784              |                                           |         |         |                               |                               |
	| delete  | -p false-20210813204010-13784             | false-20210813204010-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:11 UTC | Fri, 13 Aug 2021 20:40:11 UTC |
	| start   | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:35 UTC | Fri, 13 Aug 2021 20:40:23 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                 |                                           |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15             |                                           |         |         |                               |                               |
	|         | --apiserver-names=localhost               |                                           |         |         |                               |                               |
	|         | --apiserver-names=www.google.com          |                                           |         |         |                               |                               |
	|         | --apiserver-port=8555                     |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| -p      | cert-options-20210813203935-13784         | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:23 UTC | Fri, 13 Aug 2021 20:40:24 UTC |
	|         | ssh openssl x509 -text -noout -in         |                                           |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:24 UTC | Fri, 13 Aug 2021 20:40:27 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --install-addons=false                    |                                           |         |         |                               |                               |
	|         | --wait=all --driver=docker                |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:57 UTC |
	|         | --alsologtostderr                         |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:27 UTC | Fri, 13 Aug 2021 20:41:10 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	|         | --memory=2200                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0              |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:10 UTC | Fri, 13 Aug 2021 20:41:13 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	| unpause | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:38 UTC | Fri, 13 Aug 2021 20:41:39 UTC |
	|         | --alsologtostderr -v=5                    |                                           |         |         |                               |                               |
	| start   | -p                                        | missing-upgrade-20210813203846-13784      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:42 UTC | Fri, 13 Aug 2021 20:41:40 UTC |
	|         | missing-upgrade-20210813203846-13784      |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | missing-upgrade-20210813203846-13784      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:40 UTC | Fri, 13 Aug 2021 20:41:43 UTC |
	|         | missing-upgrade-20210813203846-13784      |                                           |         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:41:13
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:41:13.216170  190586 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:41:13.216266  190586 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:13.216275  190586 out.go:311] Setting ErrFile to fd 2...
	I0813 20:41:13.216278  190586 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:13.216397  190586 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:41:13.216623  190586 out.go:305] Setting JSON to false
	I0813 20:41:13.253752  190586 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5036,"bootTime":1628882237,"procs":259,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:41:13.253885  190586 start.go:121] virtualization: kvm guest
	I0813 20:41:13.256490  190586 out.go:177] * [kubernetes-upgrade-20210813204027-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:41:13.256589  190586 notify.go:169] Checking for updates...
	I0813 20:41:13.259158  190586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:13.260508  190586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:41:13.261955  190586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:41:13.263282  190586 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:41:13.263774  190586 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204027-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:41:13.264169  190586 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:41:13.321386  190586 docker.go:132] docker version: linux-19.03.15
	I0813 20:41:13.321545  190586 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:13.416471  190586 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 20:41:13.36746279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warning
s:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:41:13.416564  190586 docker.go:244] overlay module found
	I0813 20:41:13.418464  190586 out.go:177] * Using the docker driver based on existing profile
	I0813 20:41:13.418492  190586 start.go:278] selected driver: docker
	I0813 20:41:13.418499  190586 start.go:751] validating driver "docker" against &{Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:13.418589  190586 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:41:13.418632  190586 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:13.418675  190586 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:13.420085  190586 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:13.420924  190586 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:13.517347  190586 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 20:41:13.458391048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:41:13.517548  190586 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:13.517602  190586 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:13.519735  190586 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:13.519825  190586 cni.go:93] Creating CNI manager for ""
	I0813 20:41:13.519837  190586 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:41:13.519852  190586 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
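
The two cni.go lines above record the CNI choice: with the docker driver and the crio runtime, minikube recommends kindnet (and, for the missing-upgrade profile later in this log, bridge once EnableDefaultCNI is true). A minimal Go sketch of that decision, with a simplified signature that is not minikube's real API:

// chooseCNI sketches the driver/runtime-based CNI recommendation
// visible in the cni.go log lines. The function name and inputs are
// hypothetical simplifications of minikube's ClusterConfig handling.
package main

import "fmt"

func chooseCNI(driver, runtime string, enableDefaultCNI bool) string {
	if enableDefaultCNI {
		return "bridge" // cni.go:142 recommends bridge when EnableDefaultCNI is true
	}
	if driver == "docker" && runtime == "crio" {
		return "kindnet" // cni.go:160 recommends kindnet for docker + crio
	}
	return "" // empty manager: leave the runtime's default CNI in place
}

func main() {
	fmt.Println(chooseCNI("docker", "crio", false)) // kindnet
	fmt.Println(chooseCNI("docker", "crio", true))  // bridge
}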
	I0813 20:41:13.521530  190586 out.go:177] * Starting control plane node kubernetes-upgrade-20210813204027-13784 in cluster kubernetes-upgrade-20210813204027-13784
	I0813 20:41:13.521575  190586 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:41:13.522935  190586 out.go:177] * Pulling base image ...
	I0813 20:41:13.522964  190586 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:41:13.523002  190586 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:41:13.523019  190586 cache.go:56] Caching tarball of preloaded images
	I0813 20:41:13.523073  190586 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:41:13.523200  190586 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:41:13.523219  190586 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 20:41:13.523365  190586 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/config.json ...
	I0813 20:41:13.620212  190586 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:41:13.620245  190586 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:41:13.620270  190586 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:41:13.620330  190586 start.go:313] acquiring machines lock for kubernetes-upgrade-20210813204027-13784: {Name:mk867fd1b3701cb21737f832aa092309ed957057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:13.620455  190586 start.go:317] acquired machines lock for "kubernetes-upgrade-20210813204027-13784" in 93.039µs
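
The machines lock above is acquired with Delay:500ms and Timeout:10m0s. A rough sketch of a poll-based file lock under those two parameters; the acquire helper and the lock path are hypothetical stand-ins, not minikube's lock package:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file with the Delay/Timeout values
// shown in the log (500ms, 10m). Illustrative only; minikube keys its
// real lock on the profile name.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close() // lock file created: we hold the lock
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay) // retry after the configured delay
	}
}

func main() {
	start := time.Now()
	if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}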
	I0813 20:41:13.620490  190586 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:41:13.620503  190586 fix.go:55] fixHost starting: 
	I0813 20:41:13.620859  190586 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:41:13.660909  190586 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210813204027-13784: state=Stopped err=<nil>
	W0813 20:41:13.660964  190586 fix.go:134] unexpected machine state, will restart: <nil>
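
fix.go decides whether to restart by inspecting the container state with the same docker command shown in the cli_runner.go line above. A small sketch of that check; containerState is a hypothetical wrapper, and the raw docker output may be cased differently than the normalized "Stopped" in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs the inspect command from the log and returns the
// trimmed state string. A non-running state is what sends fix.go down
// the "docker start" path seen a few lines later.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("kubernetes-upgrade-20210813204027-13784")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if state != "running" {
		fmt.Println("state =", state, "-> restarting existing container")
	}
}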
	I0813 20:41:12.761026  184062 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:41:12.761059  184062 machine.go:91] provisioned docker machine in 1.026170522s
	I0813 20:41:12.761071  184062 start.go:267] post-start starting for "missing-upgrade-20210813203846-13784" (driver="docker")
	I0813 20:41:12.761079  184062 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:12.761143  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:12.761189  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:12.803241  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:12.892576  184062 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:12.895195  184062 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:12.895221  184062 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:12.895234  184062 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:12.895242  184062 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:41:12.895253  184062 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:12.895306  184062 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:12.895406  184062 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:41:12.895524  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:12.901776  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:12.917411  184062 start.go:270] post-start completed in 156.325269ms
	I0813 20:41:12.917471  184062 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:12.917549  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:12.960313  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.094246  184062 fix.go:57] fixHost completed within 29.67467079s
	I0813 20:41:13.094280  184062 start.go:80] releasing machines lock for "missing-upgrade-20210813203846-13784", held for 29.674767502s
	I0813 20:41:13.094368  184062 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20210813203846-13784
	I0813 20:41:13.147968  184062 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:13.148022  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:13.148030  184062 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:13.148132  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:13.195943  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.196112  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:13.285243  184062 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:41:13.437575  184062 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:41:13.446764  184062 docker.go:153] disabling docker service ...
	I0813 20:41:13.446817  184062 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:41:13.456218  184062 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:41:13.466857  184062 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:41:13.543702  184062 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:41:13.626332  184062 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:41:13.636155  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:41:13.648901  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
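
Both profiles patch /etc/crio/crio.conf in place with a line-anchored sed, as in the command above. The same substitution expressed in Go, assuming a local copy of the config text rather than the remote file:

package main

import (
	"fmt"
	"regexp"
)

// setPauseImage mirrors the sed from the log:
//   sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|'
// applied to the crio.conf text. Illustrative only.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^pause_image = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
}

func main() {
	conf := "log_level = \"info\"\npause_image = \"k8s.gcr.io/pause:3.1\"\n"
	fmt.Print(setPauseImage(conf, "k8s.gcr.io/pause:3.2"))
}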
	I0813 20:41:13.657710  184062 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:41:13.663842  184062 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:41:13.663894  184062 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:41:13.670882  184062 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:41:13.676999  184062 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:41:13.742334  184062 ssh_runner.go:149] Run: sudo systemctl start crio
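
The failed sysctl above is tolerated ("which might be okay"): minikube falls back to loading br_netfilter and enabling IPv4 forwarding before restarting crio. A condensed sketch of that fallback chain, with the ssh transport and most error handling omitted:

package main

import (
	"fmt"
	"os/exec"
)

// prepareNetfilter sketches the sequence from the log: verify the
// bridge-nf-call-iptables sysctl, and if it fails (status 255 above),
// load br_netfilter, then enable IPv4 forwarding.
func prepareNetfilter() error {
	if err := exec.Command("sudo", "sysctl",
		"net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// the sysctl key is absent until the module is loaded
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c",
		"echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := prepareNetfilter(); err != nil {
		fmt.Println(err)
	}
}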
	I0813 20:41:13.753589  184062 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:41:13.753659  184062 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:41:13.756898  184062 start.go:413] Will wait 60s for crictl version
	I0813 20:41:13.756951  184062 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:13.785844  184062 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:41:13.785924  184062 ssh_runner.go:149] Run: crio --version
	I0813 20:41:13.850759  184062 ssh_runner.go:149] Run: crio --version
	I0813 20:41:13.920844  184062 out.go:177] * Preparing Kubernetes v1.18.0 on CRI-O 1.20.3 ...
	I0813 20:41:13.920925  184062 cli_runner.go:115] Run: docker network inspect missing-upgrade-20210813203846-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:13.958951  184062 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:13.962164  184062 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
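
Each run pins host.minikube.internal to the network gateway by filtering any stale entry out of /etc/hosts and appending a fresh one, exactly as the bash one-liner above does. The same filter-and-append over a hosts string in Go; pinHost is an illustrative helper, not minikube code:

package main

import (
	"fmt"
	"strings"
)

// pinHost drops any line ending in "\thost.minikube.internal" and
// appends the fresh gateway mapping, mirroring the grep -v + echo + cp
// pipeline in the log.
func pinHost(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.67.1\thost.minikube.internal\n"
	fmt.Print(pinHost(hosts, "192.168.67.1"))
}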
	I0813 20:41:13.971398  184062 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0813 20:41:13.971437  184062 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:13.998484  184062 crio.go:420] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0813 20:41:13.998515  184062 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 20:41:13.998594  184062 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:41:13.998627  184062 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0813 20:41:13.998773  184062 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0813 20:41:13.998785  184062 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0813 20:41:13.998841  184062 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0813 20:41:13.998600  184062 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0813 20:41:13.998900  184062 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0813 20:41:13.998921  184062 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:13.998972  184062 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0813 20:41:13.998996  184062 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:41:13.999586  184062 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:41:13.999678  184062 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:41:13.999586  184062 image.go:175] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0813 20:41:13.999892  184062 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:41:14.007760  184062 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0813 20:41:14.015685  184062 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{Image:0xc0001a12c0}
	I0813 20:41:14.015787  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.2
	I0813 20:41:14.352143  184062 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc0000a2780}
	I0813 20:41:14.352245  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:14.446608  184062 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0813 20:41:14.446656  184062 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:14.446707  184062 ssh_runner.go:149] Run: which crictl
	I0813 20:41:14.450067  184062 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:14.463872  184062 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc0004822a0}
	I0813 20:41:14.463963  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:41:14.482896  184062 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 20:41:14.482979  184062 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:41:14.538448  184062 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0813 20:41:14.538497  184062 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:41:14.538549  184062 ssh_runner.go:149] Run: which crictl
	I0813 20:41:14.538560  184062 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0813 20:41:14.538587  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0813 20:41:14.545218  184062 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:41:14.608311  184062 crio.go:191] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:41:14.608382  184062 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:41:14.624314  184062 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0813 20:41:14.624404  184062 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:41:14.810370  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0
	I0813 20:41:14.823794  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0
	I0813 20:41:14.831505  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0
	I0813 20:41:14.837157  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0
	I0813 20:41:14.875739  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7
	I0813 20:41:16.137715  184062 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.529294083s)
	I0813 20:41:16.137761  184062 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
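
The cache_images sequence interleaved here follows one pattern per image: inspect the runtime for the expected hash, stat the tarball on the node, transfer it from the local cache if the stat fails, then load it with podman. A condensed sketch under the assumption of local commands instead of minikube's ssh_runner; runSSH and the shortened cache path are stand-ins:

package main

import (
	"fmt"
	"os/exec"
)

// runSSH is a local stand-in for minikube's ssh_runner.
func runSSH(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }

// loadCachedImage condenses the transfer-and-load steps from the log:
// stat the tarball on the node, copy it from the cache if missing
// (scp in the real flow), then podman load it.
func loadCachedImage(cachePath, nodePath string) error {
	if err := runSSH("stat", nodePath); err != nil {
		// existence check failed (status 1 in the log): transfer the tarball
		if err := runSSH("cp", cachePath, nodePath); err != nil {
			return fmt.Errorf("transfer %s: %w", cachePath, err)
		}
	}
	return runSSH("sudo", "podman", "load", "-i", nodePath)
}

func main() {
	err := loadCachedImage(
		"/home/jenkins/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5",
		"/var/lib/minikube/images/storage-provisioner_v5",
	)
	fmt.Println("load result:", err)
}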
	I0813 20:41:16.137771  184062 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4: (1.513340529s)
	I0813 20:41:16.137822  184062 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0813 20:41:16.137851  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes)
	I0813 20:41:16.137864  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0: (1.314032975s)
	I0813 20:41:16.137972  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0: (1.306436855s)
	I0813 20:41:16.138018  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0: (1.300838079s)
	I0813 20:41:16.138069  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7: (1.262310472s)
	I0813 20:41:16.138362  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0: (1.32794891s)
	I0813 20:41:16.384138  184062 crio.go:191] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:41:16.384215  184062 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:41:16.785742  184062 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000482320}
	I0813 20:41:16.785881  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:41:17.258650  184062 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc0014160a0}
	I0813 20:41:17.258765  184062 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0813 20:41:13.663357  190586 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20210813204027-13784" ...
	I0813 20:41:13.663420  190586 cli_runner.go:115] Run: docker start kubernetes-upgrade-20210813204027-13784
	I0813 20:41:14.413173  190586 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:41:14.480738  190586 kic.go:420] container "kubernetes-upgrade-20210813204027-13784" state is running.
	I0813 20:41:14.481185  190586 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:14.533421  190586 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/config.json ...
	I0813 20:41:14.533652  190586 machine.go:88] provisioning docker machine ...
	I0813 20:41:14.533680  190586 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210813204027-13784"
	I0813 20:41:14.533741  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:14.593205  190586 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:14.593543  190586 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:14.593573  190586 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210813204027-13784 && echo "kubernetes-upgrade-20210813204027-13784" | sudo tee /etc/hostname
	I0813 20:41:14.594189  190586 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51748->127.0.0.1:32914: read: connection reset by peer
	I0813 20:41:17.757652  190586 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210813204027-13784
	
	I0813 20:41:17.757736  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:17.809908  190586 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:17.810150  190586 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:17.810194  190586 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210813204027-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210813204027-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210813204027-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:41:17.937019  190586 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:41:17.937073  190586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:41:17.937122  190586 ubuntu.go:177] setting up certificates
	I0813 20:41:17.937137  190586 provision.go:83] configureAuth start
	I0813 20:41:17.937207  190586 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:17.976748  190586 provision.go:138] copyHostCerts
	I0813 20:41:17.976838  190586 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:41:17.976852  190586 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:41:17.976905  190586 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:41:17.977012  190586 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:41:17.977030  190586 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:41:17.977056  190586 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:41:17.977139  190586 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:41:17.977150  190586 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:41:17.977173  190586 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:41:17.977240  190586 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210813204027-13784 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20210813204027-13784]
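
provision.go issues a server certificate signed by the minikube CA with the SAN list shown above (192.168.58.2, 127.0.0.1, localhost, minikube, the profile name). A self-contained sketch of producing a certificate with those SANs; it self-signs for brevity where minikube signs with its CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// Minimal sketch of a server cert carrying the SANs from the log.
// Self-signed here; the real flow signs with the ca.pem/ca-key.pem pair.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20210813204027-13784"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20210813204027-13784"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}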
	I0813 20:41:18.160322  190586 provision.go:172] copyRemoteCerts
	I0813 20:41:18.160400  190586 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:41:18.160452  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:18.202108  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:18.332947  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:41:18.351215  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:41:18.370875  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0813 20:41:18.391442  190586 provision.go:86] duration metric: configureAuth took 454.287423ms
	I0813 20:41:18.391470  190586 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:41:18.391661  190586 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204027-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:41:18.391813  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:18.434661  190586 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:18.434912  190586 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:18.434948  190586 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:41:18.932995  190586 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:41:18.933031  190586 machine.go:91] provisioned docker machine in 4.399358736s
	I0813 20:41:18.933045  190586 start.go:267] post-start starting for "kubernetes-upgrade-20210813204027-13784" (driver="docker")
	I0813 20:41:18.933054  190586 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:18.933127  190586 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:18.933181  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:18.979315  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:19.078356  190586 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:19.081408  190586 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:19.081428  190586 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:19.081436  190586 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:19.081443  190586 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:41:19.081453  190586 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:19.081525  190586 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:19.081621  190586 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:41:19.081732  190586 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:19.088686  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:19.106549  190586 start.go:270] post-start completed in 173.486585ms
	I0813 20:41:19.106625  190586 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:19.106674  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:19.153258  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:19.237881  190586 fix.go:57] fixHost completed within 5.617373706s
	I0813 20:41:19.237915  190586 start.go:80] releasing machines lock for "kubernetes-upgrade-20210813204027-13784", held for 5.617441789s
	I0813 20:41:19.238030  190586 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:19.282919  190586 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:19.282963  190586 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:19.282980  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:19.283014  190586 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:19.331699  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:19.340913  190586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:19.421595  190586 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:41:19.698844  190586 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:41:19.709693  190586 docker.go:153] disabling docker service ...
	I0813 20:41:19.709752  190586 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:41:19.719985  190586 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:41:19.730683  190586 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:41:19.815964  190586 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:41:19.909438  190586 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:41:19.919982  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:41:19.932677  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:41:19.940476  190586 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:41:19.940507  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 20:41:19.948544  190586 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:41:19.954722  190586 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:41:19.954777  190586 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:41:19.961861  190586 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:41:19.968312  190586 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:41:20.050000  190586 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:41:20.061375  190586 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:41:20.061451  190586 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:41:20.064986  190586 start.go:413] Will wait 60s for crictl version
	I0813 20:41:20.065043  190586 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:20.093436  190586 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:41:20.093577  190586 ssh_runner.go:149] Run: crio --version
	I0813 20:41:20.162541  190586 ssh_runner.go:149] Run: crio --version
	I0813 20:41:18.414126  184062 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/metrics-scraper_v1.0.4: (2.029879176s)
	I0813 20:41:18.414162  184062 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0813 20:41:18.414202  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (1.628288577s)
	I0813 20:41:18.414249  184062 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0813 20:41:18.414290  184062 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:41:18.414335  184062 ssh_runner.go:149] Run: which crictl
	I0813 20:41:18.444723  184062 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0: (1.185922336s)
	I0813 20:41:18.444842  184062 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:41:18.470243  184062 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0813 20:41:18.470317  184062 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:41:18.473651  184062 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0813 20:41:18.473699  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes)
	I0813 20:41:18.601464  184062 crio.go:191] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:41:18.601567  184062 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:41:20.233110  190586 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0813 20:41:20.233191  190586 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210813204027-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:20.273989  190586 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:20.277343  190586 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:41:20.287295  190586 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:41:20.287358  190586 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:20.317405  190586 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 20:41:20.317467  190586 ssh_runner.go:149] Run: which lz4
	I0813 20:41:20.320544  190586 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 20:41:20.323473  190586 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:41:20.323498  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (590981257 bytes)
	I0813 20:41:21.457122  190586 crio.go:362] Took 1.136612 seconds to copy over tarball
	I0813 20:41:21.457197  190586 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
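
With no preloaded images in the runtime, minikube copies the ~590 MB preload tarball to the node and unpacks it over /var with lz4, which is the copy-over-tarball step timed above. The equivalent extraction call, sketched around os/exec with simplified error handling:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload mirrors the command in the log:
//   sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
// which unpacks the cached image store under /var.
func extractPreload(tarball string) error {
	start := time.Now()
	if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	fmt.Printf("extracted in %s\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}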
	I0813 20:41:26.997935  184062 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/dashboard_v2.1.0: (8.396339719s)
	I0813 20:41:26.997964  184062 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
	I0813 20:41:26.997986  184062 cache_images.go:113] Successfully loaded all cached images
	I0813 20:41:26.997999  184062 cache_images.go:82] LoadImages completed in 12.999469642s
	I0813 20:41:26.998074  184062 ssh_runner.go:149] Run: crio config
	I0813 20:41:27.079538  184062 cni.go:93] Creating CNI manager for ""
	I0813 20:41:27.079562  184062 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 20:41:27.079574  184062 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:41:27.079590  184062 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-20210813203846-13784 NodeName:missing-upgrade-20210813203846-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:41:27.079778  184062 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "missing-upgrade-20210813203846-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:41:27.079925  184062 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=missing-upgrade-20210813203846-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20210813203846-13784 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:41:27.079983  184062 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
	I0813 20:41:27.088035  184062 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:41:27.088103  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:41:27.096475  184062 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (567 bytes)
	I0813 20:41:27.110939  184062 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:41:27.129852  184062 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0813 20:41:27.145342  184062 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:41:27.149678  184062 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:41:27.162594  184062 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784 for IP: 192.168.67.2
	I0813 20:41:27.162661  184062 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:41:27.162685  184062 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:41:27.162763  184062 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/client.key
	I0813 20:41:27.162794  184062 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key.c7fa3a9e
	I0813 20:41:27.162813  184062 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:41:27.715222  184062 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt.c7fa3a9e ...
	I0813 20:41:27.715253  184062 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt.c7fa3a9e: {Name:mkb5af55e458384b2903d9e0638bf2c64cc9d2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:27.715435  184062 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key.c7fa3a9e ...
	I0813 20:41:27.715449  184062 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key.c7fa3a9e: {Name:mkdda624701aacf40cd5f637845da44b60bfbde3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:27.715550  184062 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt
	I0813 20:41:27.715664  184062 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key
	I0813 20:41:27.715735  184062 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/proxy-client.key
	I0813 20:41:27.715855  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:41:27.715892  184062 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:41:27.715902  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:41:27.715924  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:41:27.715949  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:41:27.715973  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:41:27.716074  184062 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:27.716980  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:41:27.735519  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:41:26.092946  190586 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.63571463s)
	I0813 20:41:26.092980  190586 crio.go:369] Took 4.635827 seconds to extract the tarball
	I0813 20:41:26.092994  190586 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 20:41:26.174945  190586 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:26.205417  190586 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:41:26.205447  190586 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:41:26.205558  190586 ssh_runner.go:149] Run: crio config
	I0813 20:41:26.287848  190586 cni.go:93] Creating CNI manager for ""
	I0813 20:41:26.287873  190586 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:41:26.287887  190586 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:41:26.287904  190586 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210813204027-13784 NodeName:kubernetes-upgrade-20210813204027-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:41:26.288086  190586 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-20210813204027-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
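The kubeadm.yaml above is rendered from the kubeadm options struct logged at kubeadm.go:153 and then scp'd to /var/tmp/minikube/kubeadm.yaml.new. A minimal text/template sketch of that render step (the struct and template text here are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // opts mirrors a few of the kubeadm option fields logged above; both the
    // struct and the template are illustrative only.
    type opts struct {
        AdvertiseAddress  string
        NodeName          string
        CRISocket         string
        KubernetesVersion string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: 8443
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        _ = t.Execute(os.Stdout, opts{
            AdvertiseAddress:  "192.168.58.2",
            NodeName:          "kubernetes-upgrade-20210813204027-13784",
            CRISocket:         "/var/run/crio/crio.sock",
            KubernetesVersion: "v1.22.0-rc.0",
        })
    }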
	
	I0813 20:41:26.288208  190586 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-20210813204027-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:41:26.288282  190586 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 20:41:26.295883  190586 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:41:26.295953  190586 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:41:26.305461  190586 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (575 bytes)
	I0813 20:41:26.317886  190586 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 20:41:26.329713  190586 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I0813 20:41:26.341321  190586 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:41:26.344168  190586 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:41:26.352614  190586 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784 for IP: 192.168.58.2
	I0813 20:41:26.352660  190586 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:41:26.352678  190586 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:41:26.352741  190586 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.key
	I0813 20:41:26.352763  190586 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/apiserver.key.cee25041
	I0813 20:41:26.352781  190586 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/proxy-client.key
	I0813 20:41:26.352882  190586 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:41:26.352918  190586 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:41:26.352931  190586 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:41:26.352970  190586 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:41:26.352996  190586 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:41:26.353017  190586 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:41:26.353060  190586 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:26.353993  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:41:26.371066  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:41:26.389018  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:41:26.406312  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:41:26.424575  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:41:26.440416  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:41:26.455971  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:41:26.472349  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:41:26.489136  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:41:26.506749  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:41:26.525017  190586 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:41:26.540722  190586 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:41:26.552342  190586 ssh_runner.go:149] Run: openssl version
	I0813 20:41:26.556919  190586 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:41:26.565289  190586 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:41:26.569278  190586 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:41:26.569328  190586 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:41:26.575491  190586 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:41:26.582355  190586 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:41:26.590251  190586 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:26.593475  190586 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:26.593582  190586 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:26.598307  190586 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:41:26.604588  190586 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:41:26.611373  190586 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:41:26.614337  190586 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:41:26.614382  190586 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:41:26.619021  190586 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
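The openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL CA-directory convention: every trusted certificate needs a <subject-hash>.0 symlink in /etc/ssl/certs so TLS clients can locate it by hash (the same thing c_rehash automates). A sketch of one rehash step, shelling out to openssl exactly as the log does (rehashCert is a hypothetical helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // rehashCert creates the <subject-hash>.0 symlink that OpenSSL-based
    // clients use to look up a CA certificate in a certs directory.
    func rehashCert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // mirror ln -fs: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := rehashCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }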
	I0813 20:41:26.625186  190586 kubeadm.go:390] StartCluster: {Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:26.625276  190586 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:41:26.625315  190586 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:26.647799  190586 cri.go:76] found id: ""
	I0813 20:41:26.647859  190586 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:41:26.654776  190586 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:41:26.654796  190586 kubeadm.go:600] restartCluster start
	I0813 20:41:26.654837  190586 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:41:26.661774  190586 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:41:26.662658  190586 kubeconfig.go:117] verify returned: extract IP: "kubernetes-upgrade-20210813204027-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:26.662965  190586 kubeconfig.go:128] "kubernetes-upgrade-20210813204027-13784" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:41:26.663514  190586 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:26.664585  190586 kapi.go:59] client config for kubernetes-upgrade-20210813204027-13784: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:26.666329  190586 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:41:26.672965  190586 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-13 20:40:37.032934541 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-13 20:41:26.336409619 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.58.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.58.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	@@ -31,7 +31,7 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20210813204027-13784
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	@@ -39,8 +39,8 @@
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	-kubernetesVersion: v1.14.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.22.0-rc.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
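The needs-reconfigure decision at kubeadm.go:568 rests on diff's exit status: diff -u exits 0 when the deployed kubeadm.yaml matches the freshly rendered one, 1 when they differ, and greater than 1 on error, so the unified diff above is both the trigger and the audit trail for the restart path. A sketch of that check (needsReconfigure is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfigure reports whether the deployed kubeadm config differs
    // from the newly rendered one, returning the unified diff when it does.
    func needsReconfigure(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // exit 0: files identical
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // exit 1: files differ
        }
        return false, "", err // exit >1: diff itself failed
    }

    func main() {
        differ, d, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("diff failed:", err)
            return
        }
        if differ {
            fmt.Println("needs reconfigure: configs differ:\n" + d)
        }
    }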
	I0813 20:41:26.672989  190586 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:41:26.673004  190586 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:41:26.673044  190586 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:26.703628  190586 cri.go:76] found id: ""
	I0813 20:41:26.703734  190586 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:41:26.713160  190586 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:41:26.720063  190586 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5759 Aug 13 20:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5799 Aug 13 20:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5959 Aug 13 20:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5747 Aug 13 20:40 /etc/kubernetes/scheduler.conf
	
	I0813 20:41:26.720128  190586 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:41:26.726738  190586 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:41:26.733312  190586 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:41:26.749039  190586 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:41:26.756914  190586 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:41:26.765777  190586 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:41:26.765806  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:26.823810  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:27.898955  190586 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075112271s)
	I0813 20:41:27.898992  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:28.039392  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:28.103357  190586 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
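Instead of a full kubeadm init, the restart path replays individual init phases in dependency order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the same /var/tmp/minikube/kubeadm.yaml, with the version-pinned binaries directory prepended to PATH. A sketch of that sequence (paths and version taken from the log above; the loop itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const binDir = "/var/lib/minikube/binaries/v1.22.0-rc.0"
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        // Phases in the order the log replays them.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf("sudo env PATH=%s:$PATH kubeadm init phase %s --config %s", binDir, phase, cfg)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return
            }
        }
    }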
	I0813 20:41:27.776481  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:41:27.796644  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:41:27.817778  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:41:27.842736  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:41:27.861176  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:41:27.881205  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:41:27.902643  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:41:27.921319  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:41:27.940981  184062 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:41:27.958602  184062 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0813 20:41:27.973075  184062 ssh_runner.go:149] Run: openssl version
	I0813 20:41:27.981776  184062 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:41:27.990546  184062 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:41:27.993799  184062 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:41:27.993853  184062 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:41:27.999021  184062 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:41:28.006931  184062 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:41:28.015288  184062 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:41:28.018524  184062 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:41:28.018569  184062 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:41:28.023684  184062 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:41:28.031176  184062 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:41:28.039731  184062 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:28.043279  184062 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:28.043323  184062 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:28.048825  184062 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:41:28.056324  184062 kubeadm.go:390] StartCluster: {Name:missing-upgrade-20210813203846-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20210813203846-13784 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.67.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:28.056409  184062 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:41:28.056453  184062 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:28.085108  184062 cri.go:76] found id: ""
	I0813 20:41:28.085185  184062 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:41:28.093212  184062 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:41:28.093239  184062 kubeadm.go:600] restartCluster start
	I0813 20:41:28.093294  184062 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:41:28.102953  184062 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:41:28.104132  184062 kubeconfig.go:93] found "missing-upgrade-20210813203846-13784" server: "https://172.17.0.4:8443"
	I0813 20:41:28.104162  184062 kubeconfig.go:117] verify returned: got: 172.17.0.4:8443, want: 192.168.67.2:8443
	I0813 20:41:28.105250  184062 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:28.106404  184062 kapi.go:59] client config for missing-upgrade-20210813203846-13784: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:28.108252  184062 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:41:28.116591  184062 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-13 20:40:12.179182759 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-13 20:41:27.140466292 +0000
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.0.4
	+  advertiseAddress: 192.168.67.2
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,23 +14,32 @@
	   criSocket: /var/run/crio/crio.sock
	   name: "missing-upgrade-20210813203846-13784"
	   kubeletExtraArgs:
	-    node-ip: 172.17.0.4
	+    node-ip: 192.168.67.2
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.0.4"]
	+  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+controllerManager:
	+  extraArgs:
	+    allocate-node-cidrs: "true"
	+    leader-elect: "false"
	+scheduler:
	+  extraArgs:
	+    leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	-controlPlaneEndpoint: 172.17.0.4:8443
	+controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	+    extraArgs:
	+      proxy-refresh-interval: "70000"
	 kubernetesVersion: v1.18.0
	 networking:
	   dnsDomain: cluster.local
	@@ -39,13 +48,27 @@
	 ---
	 apiVersion: kubelet.config.k8s.io/v1beta1
	 kind: KubeletConfiguration
	+authentication:
	+  x509:
	+    clientCAFile: /var/lib/minikube/certs/ca.crt
	+cgroupDriver: systemd
	+clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	 evictionHard:
	   nodefs.available: "0%"
	   nodefs.inodesFree: "0%"
	   imagefs.available: "0%"
	+failSwapOn: false
	+staticPodPath: /etc/kubernetes/manifests
	 ---
	 apiVersion: kubeproxy.config.k8s.io/v1alpha1
	 kind: KubeProxyConfiguration
	-metricsBindAddress: 172.17.0.4:10249
	+clusterCIDR: "10.244.0.0/16"
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
	I0813 20:41:28.116612  184062 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:41:28.116628  184062 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:41:28.116675  184062 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:28.142508  184062 cri.go:76] found id: ""
	I0813 20:41:28.142562  184062 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	W0813 20:41:28.151218  184062 kubeadm.go:656] Failed to stop kubelet, this might cause upgrade errors: sudo systemctl stop kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to stop kubelet.service: Unit kubelet.service not loaded.
	I0813 20:41:28.151280  184062 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:41:28.158934  184062 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:41:28.158991  184062 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:41:28.167013  184062 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:41:28.167033  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:28.223747  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:29.349186  184062 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125402899s)
	I0813 20:41:29.349222  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:29.490926  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:29.547265  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:29.606450  184062 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:41:29.606505  184062 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:30.120899  184062 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:30.620752  184062 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:31.121065  184062 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:31.175565  184062 api_server.go:70] duration metric: took 1.569109378s to wait for apiserver process to appear ...
	I0813 20:41:31.175595  184062 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:41:31.175610  184062 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:41:28.191539  190586 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:41:28.191600  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:28.707517  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:29.207258  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:29.706988  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:30.207504  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:30.707374  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:31.207901  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:31.707369  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:32.207474  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:32.707669  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
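The repeated pgrep lines above are a fixed-interval wait: api_server.go:50 re-runs sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms until the process exists, then records the duration metric. A sketch of the loop (waitForProcess is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout expires.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                return nil // pgrep exits 0 once a process matches
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
            fmt.Println(err)
        }
    }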
	I0813 20:41:34.537430  184062 api_server.go:265] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:41:34.537458  184062 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:41:35.038283  184062 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:41:37.042201  184062 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:41:37.042236  184062 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:41:37.538456  184062 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:41:33.206932  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:33.707615  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:34.207588  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:34.707343  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:35.207738  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:35.707753  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:36.207289  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:36.707422  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:37.206900  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:37.706921  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:37.898212  184062 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:41:37.898252  184062 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:41:38.037684  184062 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:41:38.043388  184062 api_server.go:265] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:41:38.043425  184062 api_server.go:101] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:41:38.538438  184062 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0813 20:41:38.543888  184062 api_server.go:265] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0813 20:41:38.549802  184062 api_server.go:139] control plane version: v1.18.0
	I0813 20:41:38.549825  184062 api_server.go:129] duration metric: took 7.374222542s to wait for apiserver health ...
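The healthz transcript above shows the usual startup sequence for a restarted apiserver: 403 first (the probe is anonymous, and anonymous access to /healthz is only granted once the RBAC bootstrap roles exist), then 500 with per-check [+]/[-] lines while etcd and the post-start hooks settle, and finally a bare 200 "ok". A sketch of the polling side, which must skip certificate verification since it authenticates as nobody (waitForHealthz is a hypothetical helper):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The probe is anonymous, so skip verification of the apiserver cert.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.67.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }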
	I0813 20:41:38.549836  184062 cni.go:93] Creating CNI manager for ""
	I0813 20:41:38.549846  184062 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 20:41:38.551300  184062 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 20:41:38.551352  184062 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 20:41:38.558749  184062 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
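Because EnableDefaultCNI is true in this profile, cni.go:142 selects the built-in bridge CNI and scps a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist. The literal file isn't reproduced in the log; the sketch below writes a representative bridge conflist using this cluster's 10.244.0.0/16 pod CIDR (the JSON content is illustrative, not minikube's exact file):

    package main

    import "os"

    // A representative bridge CNI conflist (illustrative; the literal file
    // minikube scps to /etc/cni/net.d/1-k8s.conflist is not shown in the log).
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        _ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
    }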
	I0813 20:41:38.587136  184062 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:41:38.601019  184062 system_pods.go:59] 5 kube-system pods found
	I0813 20:41:38.601052  184062 system_pods.go:61] "etcd-missing-upgrade-20210813203846-13784" [5570015b-654d-44a5-aceb-f2f71db96618] Running
	I0813 20:41:38.601059  184062 system_pods.go:61] "kube-apiserver-missing-upgrade-20210813203846-13784" [4119513a-e48d-4adb-85b3-03ad350c3de5] Running
	I0813 20:41:38.601065  184062 system_pods.go:61] "kube-controller-manager-missing-upgrade-20210813203846-13784" [930a43f0-ce68-40db-ab28-e14b4af6a3b0] Running
	I0813 20:41:38.601072  184062 system_pods.go:61] "kube-scheduler-missing-upgrade-20210813203846-13784" [41342111-9a1b-47da-b3c8-223164b6ae99] Running
	I0813 20:41:38.601082  184062 system_pods.go:61] "storage-provisioner" [403ebbd1-f1e0-4099-82c1-69f006219bca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0813 20:41:38.601092  184062 system_pods.go:74] duration metric: took 13.932803ms to wait for pod list to return data ...
	I0813 20:41:38.601105  184062 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:41:38.605131  184062 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:41:38.605159  184062 node_conditions.go:123] node cpu capacity is 8
	I0813 20:41:38.605176  184062 node_conditions.go:105] duration metric: took 4.066025ms to run NodePressure ...
	I0813 20:41:38.605199  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:41:38.885475  184062 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:41:38.910911  184062 ops.go:34] apiserver oom_adj: -16
	I0813 20:41:38.910930  184062 kubeadm.go:604] restartCluster took 10.817684675s
	I0813 20:41:38.910939  184062 kubeadm.go:392] StartCluster complete in 10.854631966s
	I0813 20:41:38.910958  184062 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:38.911044  184062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:38.912581  184062 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:41:38.913822  184062 kapi.go:59] client config for missing-upgrade-20210813203846-13784: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:39.428495  184062 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "missing-upgrade-20210813203846-13784" rescaled to 1
	I0813 20:41:39.428557  184062 start.go:226] Will wait 6m0s for node &{Name:m01 IP:192.168.67.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
	I0813 20:41:39.430596  184062 out.go:177] * Verifying Kubernetes components...
	I0813 20:41:39.428722  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:41:39.430671  184062 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:41:39.428932  184062 config.go:177] Loaded profile config "missing-upgrade-20210813203846-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0813 20:41:39.428950  184062 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0813 20:41:39.430825  184062 addons.go:59] Setting storage-provisioner=true in profile "missing-upgrade-20210813203846-13784"
	I0813 20:41:39.430859  184062 addons.go:135] Setting addon storage-provisioner=true in "missing-upgrade-20210813203846-13784"
	W0813 20:41:39.430868  184062 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:41:39.430899  184062 host.go:66] Checking if "missing-upgrade-20210813203846-13784" exists ...
	I0813 20:41:39.430830  184062 addons.go:59] Setting default-storageclass=true in profile "missing-upgrade-20210813203846-13784"
	I0813 20:41:39.430951  184062 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "missing-upgrade-20210813203846-13784"
	I0813 20:41:39.431243  184062 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}
	I0813 20:41:39.431437  184062 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}
	I0813 20:41:39.443249  184062 kubeadm.go:484] skip waiting for components based on config.
	I0813 20:41:39.443268  184062 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:41:39.446310  184062 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:41:39.446332  184062 node_conditions.go:123] node cpu capacity is 8
	I0813 20:41:39.446346  184062 node_conditions.go:105] duration metric: took 3.071171ms to run NodePressure ...
	I0813 20:41:39.446362  184062 start.go:231] waiting for startup goroutines ...
	I0813 20:41:39.521751  184062 kapi.go:59] client config for missing-upgrade-20210813203846-13784: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/missing-upgrade-20210813203846-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:39.540414  184062 addons.go:135] Setting addon default-storageclass=true in "missing-upgrade-20210813203846-13784"
	W0813 20:41:39.540434  184062 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:41:39.540464  184062 host.go:66] Checking if "missing-upgrade-20210813203846-13784" exists ...
	I0813 20:41:39.540957  184062 cli_runner.go:115] Run: docker container inspect missing-upgrade-20210813203846-13784 --format={{.State.Status}}
	I0813 20:41:39.555874  184062 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:41:39.556037  184062 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:41:39.556057  184062 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:41:39.556125  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:39.608383  184062 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:41:39.608403  184062 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:41:39.608461  184062 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20210813203846-13784
	I0813 20:41:39.649586  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:39.679818  184062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/missing-upgrade-20210813203846-13784/id_rsa Username:docker}
	I0813 20:41:39.727560  184062 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.18.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
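The sed pipeline above splices a hosts block into the CoreDNS Corefile immediately ahead of the forward directive, so after the replace the ConfigMap contains roughly:

	        hosts {
	           192.168.67.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

This is what lets pods resolve host.minikube.internal to the host-side gateway address.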
	I0813 20:41:39.771497  184062 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:41:39.818354  184062 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:41:40.203499  184062 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0813 20:41:40.376414  184062 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:41:40.376442  184062 addons.go:344] enableAddons completed in 947.495829ms
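Both addons were applied as static manifests via the cluster's own kubectl binary; their state can be inspected afterwards with (a usage sketch):

	minikube -p missing-upgrade-20210813203846-13784 addons list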
	I0813 20:41:40.424682  184062 start.go:462] kubectl: 1.20.5, cluster: 1.18.0 (minor skew: 2)
	I0813 20:41:40.426545  184062 out.go:177] 
	W0813 20:41:40.426717  184062 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.18.0.
	I0813 20:41:40.428176  184062 out.go:177]   - Want kubectl v1.18.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:41:40.429893  184062 out.go:177] * Done! kubectl is now configured to use "missing-upgrade-20210813203846-13784" cluster and "" namespace by default
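The kubectl skew warning above is informational: a 1.20.5 client against a 1.18.0 cluster is outside kubectl's supported +/-1 minor-version skew. One way to sidestep it, following the hint printed above, is to run minikube's bundled, version-matched kubectl:

	# runs a kubectl matching the cluster's Kubernetes version (v1.18.0 here)
	minikube -p missing-upgrade-20210813203846-13784 kubectl -- get pods -A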
	I0813 20:41:38.207242  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:38.706959  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:39.207349  190586 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:39.272393  190586 api_server.go:70] duration metric: took 11.080852806s to wait for apiserver process to appear ...
	I0813 20:41:39.272415  190586 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:41:39.272425  190586 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:48 UTC. --
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.135173785Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.136754653Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139480430Z" level=info msg="Conmon does support the --sync option"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139554604Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139563583Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.144516509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.147006953Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.149207086Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160317327Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160348934Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558550091Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-ts9sl Namespace:kube-system ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 NetNS:/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558791773Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 13 20:40:53 pause-20210813203929-13784 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.306861254Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.450483242Z" level=info msg="Ran pod sandbox c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 with infra container: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.451659183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452327272Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452980775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.453580166Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.454314289Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466027097Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/passwd: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466066274Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/group: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.662676174Z" level=info msg="Created container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.663296728Z" level=info msg="Starting container: 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.673744233Z" level=info msg="Started container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
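The CreateContainer/StartContainer sequence above can be cross-checked on the node with crictl, which talks to the same CRI-O socket (a sketch; -a also lists containers that have since exited):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	  ps -a --name storage-provisioner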
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	8422317486aff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   51 seconds ago       Exited              storage-provisioner       0                   c9be4b40ae287
	f5e960ccbf41e       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   58 seconds ago       Running             coredns                   0                   32623516945f8
	0e66c2b5613f5       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   About a minute ago   Running             kindnet-cni               0                   3a829ab2057cc
	15fb32d86d158       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   125d82aa8b508
	765e30beb45ae       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   eae2bc9a9df7c
	ecf109c279e47       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   3c556f4397e88
	de897ce9eab3c       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   4483c604f9ed1
	0a93b9e0c15af       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   436db4ab23452
	
	* 
	* ==> coredns [f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +2.362662] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[  +6.305228] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.592043] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.311286] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a d3 6c 1f a1 fb 08 06        ........l.....
	[Aug13 20:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth638a0651
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 5b 20 34 63 04 08 06        .......[ 4c...
	[ +15.906177] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.377154] cgroup: cgroup2: unknown option "nsdelegate"
	[  +4.439794] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.000008] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.270961] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 1d 9a 5f 02 4e 08 06        ......j.._.N..
	[ +14.030429] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9a8d7a44
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d6 75 db 96 9a 5d 08 06        .......u...]..
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.579695] cgroup: cgroup2: unknown option "nsdelegate"
	[ +32.689527] cgroup: cgroup2: unknown option "nsdelegate"
	[  +7.907139] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803] <==
	* 2021-08-13 20:40:16.242033 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:0 size:5" took too long (640.465675ms) to execute
	2021-08-13 20:40:16.242101 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" " with result "range_response_count:1 size:4260" took too long (641.294407ms) to execute
	2021-08-13 20:40:17.051976 W | etcdserver: request "header:<ID:8128006947642344446 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" mod_revision:289 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" value_size:3977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" > >>" with result "size:16" took too long (432.247936ms) to execute
	2021-08-13 20:40:18.660824 W | wal: sync duration of 1.725448216s, expected less than 1s
	2021-08-13 20:40:18.929173 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.00007561s) to execute
	WARNING: 2021/08/13 20:40:18 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-13 20:40:19.784013 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (3.144407226s) to execute
	2021-08-13 20:40:19.784107 W | etcdserver: request "header:<ID:8128006947642344449 > lease_revoke:<id:70cc7b413e210346>" with result "size:29" took too long (1.12306053s) to execute
	2021-08-13 20:40:19.784416 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (2.727675957s) to execute
	2021-08-13 20:40:19.784657 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (2.728835843s) to execute
	2021-08-13 20:40:19.790810 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813203929-13784\" " with result "range_response_count:1 size:3976" took too long (848.789811ms) to execute
	2021-08-13 20:40:24.423766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:25.164612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:35.163828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:45.164164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:55.164376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:41:39.384796 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true " with result "range_response_count:0 size:5" took too long (106.40081ms) to execute
	2021-08-13 20:41:39.392055 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true " with result "range_response_count:0 size:5" took too long (113.28513ms) to execute
	2021-08-13 20:41:39.394712 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (113.466551ms) to execute
	2021-08-13 20:41:39.395210 W | etcdserver: read-only range request "key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.642459ms) to execute
	2021-08-13 20:41:39.395944 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.724887ms) to execute
	2021-08-13 20:41:39.396062 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.818866ms) to execute
	2021-08-13 20:41:39.396181 W | etcdserver: read-only range request "key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.857305ms) to execute
	2021-08-13 20:41:39.396324 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.898932ms) to execute
	2021-08-13 20:41:39.396475 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (114.941075ms) to execute
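The recurring "took too long" range-request warnings, together with the 1.7s WAL sync above, point at slow fsyncs on the loaded CI host rather than an etcd fault. Backend latency can be checked directly with etcdctl (a sketch, assuming minikube's default certificate layout under /var/lib/minikube/certs):

	sudo ETCDCTL_API=3 etcdctl \
	  --cacert /var/lib/minikube/certs/etcd/ca.crt \
	  --cert /var/lib/minikube/certs/etcd/server.crt \
	  --key /var/lib/minikube/certs/etcd/server.key \
	  --endpoints https://127.0.0.1:2379 endpoint status -w table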
	
	* 
	* ==> kernel <==
	*  20:41:58 up  1:24,  0 users,  load average: 5.87, 3.78, 2.05
	Linux pause-20210813203929-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f] <==
	* Trace[1263737565]: ---"About to write a response" 852ms (20:40:00.793)
	Trace[1263737565]: [852.227308ms] [852.227308ms] END
	I0813 20:40:23.498807       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:40:23.848939       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:40:39.165407       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:40:39.165451       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:40:39.165458       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:41:39.277412       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:41:39.277465       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:41:39.277475       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 20:41:39.278238       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0813 20:41:39.278305       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:41:39.278321       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:41:39.279137       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0813 20:41:39.279183       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:41:39.279488       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:41:39.279566       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0813 20:41:39.279600       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:41:39.285120       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:41:39.287182       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:41:39.291151       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:41:39.292938       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:41:40.026569       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:41:40.026653       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:41:40.028222       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	
	* 
	* ==> kube-controller-manager [de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637] <==
	* I0813 20:40:23.507939       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pjb6w"
	I0813 20:40:23.517415       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k8wlb"
	E0813 20:40:23.548254       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"1eae9bdb-1aea-4c39-a2f9-a9df683878b4", ResourceVersion:"265", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484003, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00175a8d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00175a8e8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019c5800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a918), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a930), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5820)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5860)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00048fc20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001baacb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000c8a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000ae2ff0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001baad00)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:40:23.661112       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.661142       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:23.667920       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.851329       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:23.871778       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:23.965355       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ncl4r"
	I0813 20:40:23.980674       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ts9sl"
	I0813 20:40:24.022166       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-ncl4r"
	E0813 20:41:38.681456       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-13784: Put "https://192.168.49.2:8443/api/v1/nodes/pause-20210813203929-13784/status": http2: client connection lost
	W0813 20:41:38.681564       1 garbagecollector.go:705] failed to discover preferred resources: Get "https://192.168.49.2:8443/api?timeout=32s": http2: client connection lost
	E0813 20:41:38.681626       1 resource_quota_controller.go:409] failed to discover resources: Get "https://192.168.49.2:8443/api?timeout=32s": http2: client connection lost
	I0813 20:41:40.051611       1 event.go:291] "Event occurred" object="pause-20210813203929-13784" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node pause-20210813203929-13784 status is now: NodeNotReady"
	I0813 20:41:40.057616       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210813203929-13784" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.070599       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20210813203929-13784" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.082148       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210813203929-13784" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.091913       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210813203929-13784" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.114978       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-pjb6w" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.134563       1 event.go:291] "Event occurred" object="kube-system/kindnet-k8wlb" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.140428       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-ts9sl" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.171387       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0813 20:41:40.171445       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e] <==
	* I0813 20:40:24.404639       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:40:24.404699       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:40:24.404733       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:24.484285       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:24.484325       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:24.484338       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:24.484352       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:24.484722       1 server.go:643] Version: v1.21.3
	I0813 20:40:24.485344       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:24.485368       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:40:24.485394       1 config.go:315] Starting service config controller
	I0813 20:40:24.485398       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 20:40:24.494950       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:24.496346       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:24.585594       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:24.585676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662] <==
	* E0813 20:40:00.668147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:00.668368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.668713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:00.669360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:00.669416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669423       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:00.669448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.680678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:00.680761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:00.680779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:01.515088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:01.520824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:01.529811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:01.530681       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:01.534900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:01.603579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:01.619709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:02.065154       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0813 20:41:17.023801       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	I0813 20:41:28.832649       1 trace.go:205] Trace[1216246411]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (13-Aug-2021 20:41:18.831) (total time: 10001ms):
	Trace[1216246411]: [10.001010096s] [10.001010096s] END
	E0813 20:41:28.832675       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=33": net/http: TLS handshake timeout
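The burst of "forbidden" list/watch errors at 20:40:00-20:40:01 is a normal startup race: the scheduler comes up before kubeadm has finished creating its RBAC bindings, and the errors stop once caches sync (see the "Caches are synced" line). That the grants eventually landed can be verified with (a sketch):

	kubectl auth can-i list pods --as=system:kube-scheduler
	# expected once RBAC is in place: yes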
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:58 UTC. --
	Aug 13 20:41:40 pause-20210813203929-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899419    4773 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:false CgroupRoot: CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899461    4773 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899476    4773 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899484    4773 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899624    4773 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899656    4773 remote_runtime.go:62] parsed scheme: ""
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899665    4773 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899704    4773 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899714    4773 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899825    4773 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899843    4773 remote_image.go:50] parsed scheme: ""
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899850    4773 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899864    4773 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899870    4773 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899968    4773 kubelet.go:404] "Attempting to sync node with API server"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899986    4773 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.900011    4773 kubelet.go:283] "Adding apiserver pod source"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.900026    4773 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.912102    4773 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="cri-o" version="1.20.3" apiVersion="v1alpha1"
	Aug 13 20:41:45 pause-20210813203929-13784 kubelet[4773]: E0813 20:41:45.139963    4773 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 13 20:41:45 pause-20210813203929-13784 kubelet[4773]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 13 20:41:45 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:45.140576    4773 server.go:1190] "Started kubelet"
	Aug 13 20:41:45 pause-20210813203929-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:41:45 pause-20210813203929-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b] <==
	* 
	goroutine 111 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc0004b4b90, 0x3)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0004b4b80)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00013a4e0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000440c80, 0x18e5530, 0xc000046100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005e7200)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005e7200, 0x18b3d60, 0xc000272000, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005e7200, 0x3b9aca00, 0x0, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0005e7200, 0x3b9aca00, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:41:58.354329  199805 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
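The logs failure above is the same TLS handshake timeout seen against 192.168.49.2:8443: log collection shells into the node and runs kubectl against the node-local kubeconfig, so it fails along with the apiserver. A minimal sketch of re-running that probe by hand, assuming the profile name and kubectl path shown in the stderr block above:

	out/minikube-linux-amd64 -p pause-20210813203929-13784 ssh \
	  'sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig'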
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210813203929-13784
helpers_test.go:236: (dbg) docker inspect pause-20210813203929-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860",
	        "Created": "2021-08-13T20:39:31.372712772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:39:31.872578968Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/hosts",
	        "LogPath": "/var/lib/docker/containers/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860/ce53ded591b3e221b3a4aa5301379593510385bfe1400b1cd038046eb2ca0860-json.log",
	        "Name": "/pause-20210813203929-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210813203929-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210813203929-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/merged",
	                "UpperDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/diff",
	                "WorkDir": "/var/lib/docker/overlay2/169548db80fecab77741cdcc6d7cdfe984e5f91c59fffbe8650a10183c050962/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210813203929-13784",
	                "Source": "/var/lib/docker/volumes/pause-20210813203929-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210813203929-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "name.minikube.sigs.k8s.io": "pause-20210813203929-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a821792d507c6dabf086e5652e018123e85e4b030464132aafdef8bc15a9d200",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a821792d507c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210813203929-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ce53ded591b3"
	                    ],
	                    "NetworkID": "a8af35fe90fb5b850638bd77da889b067a8390ebee6680d76e896390e70a0e9e",
	                    "EndpointID": "0b310d5a393fb3e0184bcf23f10e5a3746cbeb23b4b202e9e5c6f681f15cdcfa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
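Single fields can be pulled out of that inspect output with Go templates instead of scanning the full JSON; the harness itself does this further down (the --format={{.State.Status}} and NetworkSettings.Ports template calls in the Last Start log). A sketch against this container, assuming the same Docker CLI:

	docker container inspect pause-20210813203929-13784 --format '{{.State.Status}}'
	# host port mapped to the apiserver, mirroring the 22/tcp template used below
	docker container inspect pause-20210813203929-13784 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'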
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813203929-13784 -n pause-20210813203929-13784: exit status 2 (347.865786ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
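The Host field alone reads Running even though the apiserver has stopped answering; the same template syntax can report the other components. A sketch, assuming the Kubelet and APIServer field names from minikube's status output:

	out/minikube-linux-amd64 status -p pause-20210813203929-13784 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'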
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813203929-13784 logs -n 25: exit status 110 (10.974915837s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:10 UTC | Fri, 13 Aug 2021 20:37:47 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --memory=2048 --driver=docker             |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:37:48 UTC | Fri, 13 Aug 2021 20:37:48 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --cancel-scheduled                        |                                           |         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:00 UTC | Fri, 13 Aug 2021 20:38:26 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	|         | --schedule 5s                             |                                           |         |         |                               |                               |
	| delete  | -p                                        | scheduled-stop-20210813203710-13784       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:28 UTC | Fri, 13 Aug 2021 20:38:33 UTC |
	|         | scheduled-stop-20210813203710-13784       |                                           |         |         |                               |                               |
	| delete  | -p                                        | insufficient-storage-20210813203833-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:40 UTC | Fri, 13 Aug 2021 20:38:46 UTC |
	|         | insufficient-storage-20210813203833-13784 |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:39:26 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	|         | --memory=2048 --force-systemd             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210813203846-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:26 UTC | Fri, 13 Aug 2021 20:39:29 UTC |
	|         | force-systemd-flag-20210813203846-13784   |                                           |         |         |                               |                               |
	| start   | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:49 UTC | Fri, 13 Aug 2021 20:39:31 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=5 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210813203849-13784    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:31 UTC | Fri, 13 Aug 2021 20:39:35 UTC |
	|         | force-systemd-env-20210813203849-13784    |                                           |         |         |                               |                               |
	| start   | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:38:46 UTC | Fri, 13 Aug 2021 20:40:06 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                    |                                           |         |         |                               |                               |
	|         | --memory=2048 --wait=true                 |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | offline-crio-20210813203846-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:06 UTC | Fri, 13 Aug 2021 20:40:09 UTC |
	|         | offline-crio-20210813203846-13784         |                                           |         |         |                               |                               |
	| delete  | -p                                        | kubenet-20210813204009-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:09 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | kubenet-20210813204009-13784              |                                           |         |         |                               |                               |
	| delete  | -p                                        | flannel-20210813204010-13784              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:10 UTC | Fri, 13 Aug 2021 20:40:10 UTC |
	|         | flannel-20210813204010-13784              |                                           |         |         |                               |                               |
	| delete  | -p false-20210813204010-13784             | false-20210813204010-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:11 UTC | Fri, 13 Aug 2021 20:40:11 UTC |
	| start   | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:35 UTC | Fri, 13 Aug 2021 20:40:23 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                 |                                           |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15             |                                           |         |         |                               |                               |
	|         | --apiserver-names=localhost               |                                           |         |         |                               |                               |
	|         | --apiserver-names=www.google.com          |                                           |         |         |                               |                               |
	|         | --apiserver-port=8555                     |                                           |         |         |                               |                               |
	|         | --driver=docker                           |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| -p      | cert-options-20210813203935-13784         | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:23 UTC | Fri, 13 Aug 2021 20:40:24 UTC |
	|         | ssh openssl x509 -text -noout -in         |                                           |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210813203935-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:24 UTC | Fri, 13 Aug 2021 20:40:27 UTC |
	|         | cert-options-20210813203935-13784         |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:39:29 UTC | Fri, 13 Aug 2021 20:40:51 UTC |
	|         | --memory=2048                             |                                           |         |         |                               |                               |
	|         | --install-addons=false                    |                                           |         |         |                               |                               |
	|         | --wait=all --driver=docker                |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:51 UTC | Fri, 13 Aug 2021 20:40:57 UTC |
	|         | --alsologtostderr                         |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| start   | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:27 UTC | Fri, 13 Aug 2021 20:41:10 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	|         | --memory=2200                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0              |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| stop    | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:10 UTC | Fri, 13 Aug 2021 20:41:13 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	| unpause | -p pause-20210813203929-13784             | pause-20210813203929-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:38 UTC | Fri, 13 Aug 2021 20:41:39 UTC |
	|         | --alsologtostderr -v=5                    |                                           |         |         |                               |                               |
	| start   | -p                                        | missing-upgrade-20210813203846-13784      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:42 UTC | Fri, 13 Aug 2021 20:41:40 UTC |
	|         | missing-upgrade-20210813203846-13784      |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	| delete  | -p                                        | missing-upgrade-20210813203846-13784      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:40 UTC | Fri, 13 Aug 2021 20:41:43 UTC |
	|         | missing-upgrade-20210813203846-13784      |                                           |         |         |                               |                               |
	| start   | -p                                        | kubernetes-upgrade-20210813204027-13784   | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:13 UTC | Fri, 13 Aug 2021 20:41:53 UTC |
	|         | kubernetes-upgrade-20210813204027-13784   |                                           |         |         |                               |                               |
	|         | --memory=2200                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0         |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker    |                                           |         |         |                               |                               |
	|         | --container-runtime=crio                  |                                           |         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:41:54
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:41:54.110020  202395 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:41:54.110115  202395 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:54.110121  202395 out.go:311] Setting ErrFile to fd 2...
	I0813 20:41:54.110126  202395 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:41:54.110272  202395 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:41:54.110576  202395 out.go:305] Setting JSON to false
	I0813 20:41:54.148451  202395 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5077,"bootTime":1628882237,"procs":271,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:41:54.148567  202395 start.go:121] virtualization: kvm guest
	I0813 20:41:54.151177  202395 out.go:177] * [kubernetes-upgrade-20210813204027-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:41:54.152568  202395 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:41:54.151333  202395 notify.go:169] Checking for updates...
	I0813 20:41:54.154035  202395 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:41:54.155408  202395 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:41:54.156712  202395 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:41:54.157159  202395 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204027-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:41:54.157596  202395 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:41:54.213034  202395 docker.go:132] docker version: linux-19.03.15
	I0813 20:41:54.213114  202395 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:54.360901  202395 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-13 20:41:54.250183805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:41:54.360990  202395 docker.go:244] overlay module found
	I0813 20:41:55.178065  202395 out.go:177] * Using the docker driver based on existing profile
	I0813 20:41:55.178101  202395 start.go:278] selected driver: docker
	I0813 20:41:55.178110  202395 start.go:751] validating driver "docker" against &{Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:55.178251  202395 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:41:55.178304  202395 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:55.178331  202395 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:55.182369  202395 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:55.183594  202395 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:41:55.304709  202395 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:true NGoroutines:72 SystemTime:2021-08-13 20:41:55.22674762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:41:55.304883  202395 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:41:55.304920  202395 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:41:55.306628  202395 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:41:55.306754  202395 cni.go:93] Creating CNI manager for ""
	I0813 20:41:55.306769  202395 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:41:55.306782  202395 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:55.308803  202395 out.go:177] * Starting control plane node kubernetes-upgrade-20210813204027-13784 in cluster kubernetes-upgrade-20210813204027-13784
	I0813 20:41:55.308839  202395 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:41:55.310193  202395 out.go:177] * Pulling base image ...
	I0813 20:41:55.310221  202395 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:41:55.310261  202395 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:41:55.310272  202395 cache.go:56] Caching tarball of preloaded images
	I0813 20:41:55.310314  202395 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:41:55.310439  202395 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:41:55.310457  202395 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 20:41:55.310624  202395 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/config.json ...
	I0813 20:41:55.408971  202395 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:41:55.409003  202395 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:41:55.409019  202395 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:41:55.409080  202395 start.go:313] acquiring machines lock for kubernetes-upgrade-20210813204027-13784: {Name:mk867fd1b3701cb21737f832aa092309ed957057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:41:55.409209  202395 start.go:317] acquired machines lock for "kubernetes-upgrade-20210813204027-13784" in 84.777µs
	I0813 20:41:55.409235  202395 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:41:55.409243  202395 fix.go:55] fixHost starting: 
	I0813 20:41:55.409548  202395 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210813204027-13784 --format={{.State.Status}}
	I0813 20:41:55.450103  202395 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210813204027-13784: state=Running err=<nil>
	W0813 20:41:55.450159  202395 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:41:55.452510  202395 out.go:177] * Updating the running docker "kubernetes-upgrade-20210813204027-13784" container ...
	I0813 20:41:55.452561  202395 machine.go:88] provisioning docker machine ...
	I0813 20:41:55.452592  202395 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210813204027-13784"
	I0813 20:41:55.452674  202395 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:55.497726  202395 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:55.497897  202395 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:55.497913  202395 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210813204027-13784 && echo "kubernetes-upgrade-20210813204027-13784" | sudo tee /etc/hostname
	I0813 20:41:55.637807  202395 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210813204027-13784
	
	I0813 20:41:55.637893  202395 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:55.682813  202395 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:55.683052  202395 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:55.683097  202395 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210813204027-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210813204027-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210813204027-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:41:55.812248  202395 main.go:130] libmachine: SSH cmd err, output: <nil>: 
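The script above is idempotent: it rewrites the 127.0.1.1 entry only when the hostname is not already mapped. A minimal manual check of the resulting mapping, assuming a shell inside the node, would be:
	# illustrative: the loopback alias written by the provisioning script
	grep '^127\.0\.1\.1' /etc/hosts
	# expected: 127.0.1.1 kubernetes-upgrade-20210813204027-13784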
	I0813 20:41:55.812282  202395 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:41:55.812318  202395 ubuntu.go:177] setting up certificates
	I0813 20:41:55.812331  202395 provision.go:83] configureAuth start
	I0813 20:41:55.812394  202395 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:55.867060  202395 provision.go:138] copyHostCerts
	I0813 20:41:55.867136  202395 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:41:55.867149  202395 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:41:55.867205  202395 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:41:55.867279  202395 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:41:55.867290  202395 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:41:55.867320  202395 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:41:55.867370  202395 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:41:55.867378  202395 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:41:55.867397  202395 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:41:55.867451  202395 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210813204027-13784 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20210813204027-13784]
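The server certificate is generated with the SANs listed above (node IP, loopback, hostname aliases). A hedged sketch for inspecting them after provisioning (the full .minikube path is abbreviated here):
	# illustrative: list the SANs baked into the generated server cert
	openssl x509 -noout -text -in .../.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'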
	I0813 20:41:55.935747  202395 provision.go:172] copyRemoteCerts
	I0813 20:41:55.935811  202395 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:41:55.935869  202395 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:55.980999  202395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:56.070147  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0813 20:41:56.087649  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:41:56.104587  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:41:56.123354  202395 provision.go:86] duration metric: configureAuth took 311.007259ms
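copyRemoteCerts places the TLS material at the standard docker-machine locations named in the scp lines above; a quick sketch to confirm, assuming a shell inside the node:
	# illustrative: the three files copied by copyRemoteCerts
	ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem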
	I0813 20:41:56.123376  202395 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:41:56.123623  202395 config.go:177] Loaded profile config "kubernetes-upgrade-20210813204027-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:41:56.123740  202395 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:56.163009  202395 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:56.163151  202395 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32914 <nil> <nil>}
	I0813 20:41:56.163166  202395 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:41:56.585306  202395 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:41:56.585347  202395 machine.go:91] provisioned docker machine in 1.132773698s
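The %!s(MISSING) token in the provisioning command above is Go's fmt error verb: the command string was passed through a formatter without its argument. Judging from the echoed output, the command that actually ran presumably expands to the following (a reconstruction, not the verbatim log):
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio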
	I0813 20:41:56.585363  202395 start.go:267] post-start starting for "kubernetes-upgrade-20210813204027-13784" (driver="docker")
	I0813 20:41:56.585373  202395 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:41:56.585447  202395 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:41:56.585543  202395 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:56.629226  202395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:56.724672  202395 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:41:56.727336  202395 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:41:56.727358  202395 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:41:56.727366  202395 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:41:56.727372  202395 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:41:56.727381  202395 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:41:56.727424  202395 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:41:56.727535  202395 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:41:56.727637  202395 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:41:56.733725  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:56.750437  202395 start.go:270] post-start completed in 165.059641ms
	I0813 20:41:56.750489  202395 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:41:56.750529  202395 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:56.791778  202395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:56.878184  202395 fix.go:57] fixHost completed within 1.468931926s
	I0813 20:41:56.878210  202395 start.go:80] releasing machines lock for "kubernetes-upgrade-20210813204027-13784", held for 1.468988303s
	I0813 20:41:56.878294  202395 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:56.925081  202395 ssh_runner.go:149] Run: systemctl --version
	I0813 20:41:56.925131  202395 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:41:56.925141  202395 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:56.925191  202395 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210813204027-13784
	I0813 20:41:56.970019  202395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:56.974010  202395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813204027-13784/id_rsa Username:docker}
	I0813 20:41:57.203291  202395 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:41:57.214554  202395 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:41:57.223954  202395 docker.go:153] disabling docker service ...
	I0813 20:41:57.223999  202395 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:41:57.232883  202395 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:41:57.241633  202395 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:41:57.346262  202395 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:41:57.431247  202395 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:41:57.440163  202395 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:41:57.451574  202395 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:41:57.458909  202395 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:41:57.458935  202395 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
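The two in-place sed edits above pin the pause image and the default CNI network; after they run, /etc/crio/crio.conf should carry lines equivalent to:
	pause_image = "k8s.gcr.io/pause:3.4.1"
	cni_default_network = "kindnet"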
	I0813 20:41:57.466224  202395 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:41:57.471926  202395 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:41:57.471966  202395 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:41:57.478378  202395 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
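The sysctl failure above is expected while br_netfilter is not loaded; the key only appears once the module is in place, which is why the modprobe follows. A sketch of the recovery sequence:
	# illustrative: the sysctl key exists only after the module loads
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables
	cat /proc/sys/net/ipv4/ip_forward    # 1 after the echo above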
	I0813 20:41:57.484263  202395 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:41:57.564186  202395 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:41:57.573033  202395 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:41:57.573105  202395 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:41:57.576134  202395 start.go:413] Will wait 60s for crictl version
	I0813 20:41:57.576180  202395 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:41:57.603835  202395 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:41:57.603952  202395 ssh_runner.go:149] Run: crio --version
	I0813 20:41:57.667650  202395 ssh_runner.go:149] Run: crio --version
	I0813 20:41:53.529941  201437 out.go:177] * Restarting existing docker container for "stopped-upgrade-20210813204011-13784" ...
	I0813 20:41:53.530023  201437 cli_runner.go:115] Run: docker start stopped-upgrade-20210813204011-13784
	I0813 20:41:54.166141  201437 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210813204011-13784 --format={{.State.Status}}
	I0813 20:41:54.212494  201437 kic.go:420] container "stopped-upgrade-20210813204011-13784" state is running.
	I0813 20:41:54.319708  201437 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210813204011-13784
	I0813 20:41:54.363379  201437 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813204011-13784/config.json ...
	I0813 20:41:54.820493  201437 machine.go:88] provisioning docker machine ...
	I0813 20:41:54.820567  201437 ubuntu.go:169] provisioning hostname "stopped-upgrade-20210813204011-13784"
	I0813 20:41:54.820637  201437 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813204011-13784
	I0813 20:41:54.862770  201437 main.go:130] libmachine: Using SSH client type: native
	I0813 20:41:54.862963  201437 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32920 <nil> <nil>}
	I0813 20:41:54.862979  201437 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210813204011-13784 && echo "stopped-upgrade-20210813204011-13784" | sudo tee /etc/hostname
	I0813 20:41:54.863538  201437 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50386->127.0.0.1:32920: read: connection reset by peer
	I0813 20:41:57.731872  202395 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0813 20:41:57.731948  202395 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210813204027-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:41:57.772436  202395 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:41:57.775944  202395 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:41:57.775995  202395 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:57.803138  202395 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:41:57.803156  202395 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:41:57.803198  202395 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:41:57.825699  202395 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:41:57.825726  202395 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:41:57.825795  202395 ssh_runner.go:149] Run: crio config
	I0813 20:41:57.892708  202395 cni.go:93] Creating CNI manager for ""
	I0813 20:41:57.892731  202395 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:41:57.892740  202395 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:41:57.892752  202395 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210813204027-13784 NodeName:kubernetes-upgrade-20210813204027-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:41:57.892885  202395 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-20210813204027-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
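The "0%!"(MISSING) values under evictionHard are the same Go fmt artifact noted earlier; the rendered file presumably reads "0%". A hedged way to sanity-check the manifest once it has been staged (it is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below):
	# illustrative dry run against the staged config
	sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run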
	
	I0813 20:41:57.892963  202395 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-20210813204027-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:41:57.893014  202395 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 20:41:57.900265  202395 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:41:57.900321  202395 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:41:57.906675  202395 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (575 bytes)
	I0813 20:41:57.918155  202395 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 20:41:57.929163  202395 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
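The kubelet unit and drop-in scp'd above take effect only after a daemon-reload; a sketch to confirm the layering, assuming systemd on the node:
	# illustrative: show kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload
	systemctl cat kubelet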
	I0813 20:41:57.940413  202395 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:41:57.943132  202395 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784 for IP: 192.168.58.2
	I0813 20:41:57.943179  202395 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:41:57.943193  202395 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:41:57.943240  202395 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.key
	I0813 20:41:57.943258  202395 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/apiserver.key.cee25041
	I0813 20:41:57.943271  202395 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/proxy-client.key
	I0813 20:41:57.943357  202395 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:41:57.943403  202395 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:41:57.943414  202395 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:41:57.943434  202395 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:41:57.943462  202395 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:41:57.943484  202395 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:41:57.943524  202395 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:41:57.944487  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:41:57.960332  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:41:57.976825  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:41:57.992158  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:41:58.007610  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:41:58.022644  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:41:58.039100  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:41:58.055380  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:41:58.071362  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:41:58.087263  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:41:58.102680  202395 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:41:58.119351  202395 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:41:58.130834  202395 ssh_runner.go:149] Run: openssl version
	I0813 20:41:58.135346  202395 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:41:58.142096  202395 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:41:58.144814  202395 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:41:58.144859  202395 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:41:58.149232  202395 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:41:58.155264  202395 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:41:58.161978  202395 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:58.164651  202395 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:58.164706  202395 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:41:58.169036  202395 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:41:58.174999  202395 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:41:58.181800  202395 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:41:58.185319  202395 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:41:58.185366  202395 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:41:58.190221  202395 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
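The hash-named symlinks created above follow OpenSSL's subject-hash lookup convention: openssl x509 -hash prints the hash that names the link, which is how b5213941.0 ends up pointing at minikubeCA.pem. A short sketch:
	# illustrative: the link name is the cert's subject hash
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0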
	I0813 20:41:58.196779  202395 kubeadm.go:390] StartCluster: {Name:kubernetes-upgrade-20210813204027-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813204027-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:41:58.196905  202395 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:41:58.196971  202395 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:58.224810  202395 cri.go:76] found id: "1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93"
	I0813 20:41:58.224834  202395 cri.go:76] found id: "b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c"
	I0813 20:41:58.224841  202395 cri.go:76] found id: "d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4"
	I0813 20:41:58.224847  202395 cri.go:76] found id: "ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52"
	I0813 20:41:58.224850  202395 cri.go:76] found id: ""
	I0813 20:41:58.224896  202395 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 20:41:58.254321  202395 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93","pid":1494,"status":"running","bundle":"/run/containers/storage/overlay-containers/1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93/userdata","rootfs":"/var/lib/containers/storage/overlay/3432014e2ad4ecce3fb56f6ea70a21aedf75d0236d5770295b47fc25f7d6b249/merged","created":"2021-08-13T20:41:47.905848329Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1acd865","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1acd865\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessageP
olicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:41:47.76553093Z","io.kubernetes.cri-o.Image":"k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.0-0","io.kubernetes.cri-o.ImageRef":"0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-20210813204027-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"3983e72ed643c5590331ea44eb01c720\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-20210813204027-13784_3983e72ed643c5590331ea44eb01c720/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.ku
bernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3432014e2ad4ecce3fb56f6ea70a21aedf75d0236d5770295b47fc25f7d6b249/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-20210813204027-13784_kube-system_3983e72ed643c5590331ea44eb01c720_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-20210813204027-13784_kube-system_3983e72ed643c5590331ea44eb01c720_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/3983e72ed643c5590331ea44eb01c720/etc-hosts\",\"readonly\":false},{\"container_path\":\"/
dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/3983e72ed643c5590331ea44eb01c720/containers/etcd/71e2d9c3\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-20210813204027-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"3983e72ed643c5590331ea44eb01c720","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3983e72ed643c5590331ea44eb01c720","kubernetes.io/config.seen":"2021-08-13T20:41:33.232606721Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bb2b59214f892e4ef22a12f059
7b205c22105cbdcb6570aae863d23b0ae387f","pid":1123,"status":"running","bundle":"/run/containers/storage/overlay-containers/5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f/userdata","rootfs":"/var/lib/containers/storage/overlay/094b480ac051af9fcf88bea94dbc17f55bd9af963baf971882f662eaf87149c3/merged","created":"2021-08-13T20:41:38.309965888Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"3983e72ed643c5590331ea44eb01c720\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.58.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-13T20:41:33.232606721Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-20210813204027-13784_kube-system_3983e72ed643c5590331ea44eb01
c720_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:41:38.175773647Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-20210813204027-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.1","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-20210813204027-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-20210813204027-13784\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"3983e72ed643c5590331ea44eb01c720\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-20210813204027-13784_3983e72ed643c5590331ea44eb01c720/5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d2
3b0ae387f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-20210813204027-13784\",\"uid\":\"3983e72ed643c5590331ea44eb01c720\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/094b480ac051af9fcf88bea94dbc17f55bd9af963baf971882f662eaf87149c3/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-20210813204027-13784_kube-system_3983e72ed643c5590331ea44eb01c720_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kuberne
tes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-20210813204027-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"3983e72ed643c5590331ea44eb01c720","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3983e72ed643c5590331ea44eb01c720","kubernetes.io/config.seen":"2021-08-13T20:41:33.232606721Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7","pid":1109,"status":"running","bundle":"/run/containers/storage/overlay-containers/733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7/userdata","rootfs":"/var/lib/containers/storage/overlay/64d34f83c79cdcecb9d54e0f3d48124d0d52c9294a5a87265b95849a2b820fd1/me
rged","created":"2021-08-13T20:41:38.309819853Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"cb1693076a7758d663421219a28460b6\",\"kubernetes.io/config.seen\":\"2021-08-13T20:41:33.232628681Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-20210813204027-13784_kube-system_cb1693076a7758d663421219a28460b6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:41:38.171144726Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-20210813204027-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d
7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-20210813204027-13784","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"cb1693076a7758d663421219a28460b6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-20210813204027-13784\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-20210813204027-13784_cb1693076a7758d663421219a28460b6/733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-20210813204027-13784\",\"uid\":\"cb1693076a7758d663421219a28460b6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/64d34f83c79cdcecb9d54e0f3d48124d0d52c9294a5a87265b95849a2b820fd1/merge
d","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-20210813204027-13784_kube-system_cb1693076a7758d663421219a28460b6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-20210813204027-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cb1693076a7758d66342
1219a28460b6","kubernetes.io/config.hash":"cb1693076a7758d663421219a28460b6","kubernetes.io/config.seen":"2021-08-13T20:41:33.232628681Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af","pid":1131,"status":"running","bundle":"/run/containers/storage/overlay-containers/8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af/userdata","rootfs":"/var/lib/containers/storage/overlay/188810c9cac3c017f2b6d7557c2fffcbb4591b9b3bba3b88952439d748fbf6b0/merged","created":"2021-08-13T20:41:38.358066205Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"d595db32b1952c0b6093124f2c1b832c\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.58.2:8443\",\"kubernetes.io/config.seen\
":\"2021-08-13T20:41:33.232625515Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-20210813204027-13784_kube-system_d595db32b1952c0b6093124f2c1b832c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:41:38.181901336Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-20210813204027-13784","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-20210813204027-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\
"d595db32b1952c0b6093124f2c1b832c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-20210813204027-13784\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-20210813204027-13784_d595db32b1952c0b6093124f2c1b832c/8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-20210813204027-13784\",\"uid\":\"d595db32b1952c0b6093124f2c1b832c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/188810c9cac3c017f2b6d7557c2fffcbb4591b9b3bba3b88952439d748fbf6b0/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-20210813204027-13784_kube-system_d595db32b1952c0b6093124f2c1b832c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","
io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-20210813204027-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"d595db32b1952c0b6093124f2c1b832c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"d595db32b1952c0b6093124f2c1b832c","kubernetes.io/config.seen":"2021-08-13T20:41:33.232625515Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMo
de":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c","pid":1248,"status":"running","bundle":"/run/containers/storage/overlay-containers/b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c/userdata","rootfs":"/var/lib/containers/storage/overlay/6c09952fe74c30487c60202b8cda99e69c2659bc8591aff27a09b045d6d4ba77/merged","created":"2021-08-13T20:41:38.689729034Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ed41f3b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ed41f3b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.t
erminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:41:38.488265698Z","io.kubernetes.cri-o.Image":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-20210813204027-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d595db32b1952c0b6093124f2c1b832c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-20210813204027-13784_d595db32b1952c0b6093124f2c1b832c/kube-apiserver/0.log","io.kube
rnetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c09952fe74c30487c60202b8cda99e69c2659bc8591aff27a09b045d6d4ba77/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-20210813204027-13784_kube-system_d595db32b1952c0b6093124f2c1b832c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-20210813204027-13784_kube-system_d595db32b1952c0b6093124f2c1b832c_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/
kubelet/pods/d595db32b1952c0b6093124f2c1b832c/containers/kube-apiserver/f3355163\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d595db32b1952c0b6093124f2c1b832c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-20210813204027-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d595db32b1952c0b6093124f2c1b832c","ku
beadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"d595db32b1952c0b6093124f2c1b832c","kubernetes.io/config.seen":"2021-08-13T20:41:33.232625515Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52","pid":1225,"status":"running","bundle":"/run/containers/storage/overlay-containers/ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52/userdata","rootfs":"/var/lib/containers/storage/overlay/478cc55d55887c968f7bbe778ab4a24c19cdc04f153550948bfa86419b0920f6/merged","created":"2021-08-13T20:41:38.666392086Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ca706355","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessag
ePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ca706355\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:41:38.464547794Z","io.kubernetes.cri-o.Image":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-co
ntroller-manager-kubernetes-upgrade-20210813204027-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"315667cabb3a6f0fb5d6f2790209417c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-20210813204027-13784_315667cabb3a6f0fb5d6f2790209417c/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/478cc55d55887c968f7bbe778ab4a24c19cdc04f153550948bfa86419b0920f6/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-20210813204027-13784_kube-system_315667cabb3a6f0fb5d6f2790209417c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427","io.kubernetes.cri-o.
SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-20210813204027-13784_kube-system_315667cabb3a6f0fb5d6f2790209417c_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/315667cabb3a6f0fb5d6f2790209417c/containers/kube-controller-manager/fd638e87\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/315667cabb3a6f0fb5d6f2790209417c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-ce
rtificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-20210813204027-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"315667cabb3a6f0fb5d6f2790209417c","kubernetes.io/config.hash":"315667cabb3a6f0fb5d6f2790209417c","kubernetes.io/config.seen":"2021-08-13T20:41:33.232627179Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVer
sion":"1.0.2-dev","id":"d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4","pid":1237,"status":"running","bundle":"/run/containers/storage/overlay-containers/d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4/userdata","rootfs":"/var/lib/containers/storage/overlay/7d48369df464efa457629a3885031d25f9707a726c9c26a7e53ad37bd303a94c/merged","created":"2021-08-13T20:41:38.669573715Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fe75c9af","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fe75c9af\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePe
riod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T20:41:38.48672322Z","io.kubernetes.cri-o.Image":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.22.0-rc.0","io.kubernetes.cri-o.ImageRef":"7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-20210813204027-13784\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cb1693076a7758d663421219a28460b6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-20210813204027-13784_cb1693076a7758d663421219a28460b6/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-
o.MountPoint":"/var/lib/containers/storage/overlay/7d48369df464efa457629a3885031d25f9707a726c9c26a7e53ad37bd303a94c/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-20210813204027-13784_kube-system_cb1693076a7758d663421219a28460b6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-20210813204027-13784_kube-system_cb1693076a7758d663421219a28460b6_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb1693076a7758d663421219a28460b6/etc-hosts\",\"readonly\":false},{\"con
tainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb1693076a7758d663421219a28460b6/containers/kube-scheduler/c65dbee4\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-20210813204027-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb1693076a7758d663421219a28460b6","kubernetes.io/config.hash":"cb1693076a7758d663421219a28460b6","kubernetes.io/config.seen":"2021-08-13T20:41:33.232628681Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427","pid":1116,"status":"running","bundle":"/run/containers/storage/overlay-containers/efa329e948d4291e844cf8991
36449df31faf01206f66528886b0ac3a3fac427/userdata","rootfs":"/var/lib/containers/storage/overlay/05ccf6668f62ca9131331912a70a244e5c67f47c7c2da8707025cad1359721a9/merged","created":"2021-08-13T20:41:38.310120417Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"315667cabb3a6f0fb5d6f2790209417c\",\"kubernetes.io/config.seen\":\"2021-08-13T20:41:33.232627179Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-20210813204027-13784_kube-system_315667cabb3a6f0fb5d6f2790209417c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T20:41:38.184039638Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-20210813204027-13784","i
o.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-20210813204027-13784","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-20210813204027-13784\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"315667cabb3a6f0fb5d6f2790209417c\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-20210813204027-13784_315667cabb3a6f0fb5d6f2790209417c/efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-20210813204027
-13784\",\"uid\":\"315667cabb3a6f0fb5d6f2790209417c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/05ccf6668f62ca9131331912a70a244e5c67f47c7c2da8707025cad1359721a9/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-20210813204027-13784_kube-system_315667cabb3a6f0fb5d6f2790209417c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/efa329e948d4291
e844cf899136449df31faf01206f66528886b0ac3a3fac427/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-20210813204027-13784","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"315667cabb3a6f0fb5d6f2790209417c","kubernetes.io/config.hash":"315667cabb3a6f0fb5d6f2790209417c","kubernetes.io/config.seen":"2021-08-13T20:41:33.232627179Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0813 20:41:58.254869  202395 cri.go:113] list returned 8 containers
	I0813 20:41:58.254883  202395 cri.go:116] container: {ID:1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93 Status:running}
	I0813 20:41:58.254912  202395 cri.go:122] skipping {1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93 running}: state = "running", want "paused"
	I0813 20:41:58.254927  202395 cri.go:116] container: {ID:5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f Status:running}
	I0813 20:41:58.254937  202395 cri.go:118] skipping 5bb2b59214f892e4ef22a12f0597b205c22105cbdcb6570aae863d23b0ae387f - not in ps
	I0813 20:41:58.254946  202395 cri.go:116] container: {ID:733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7 Status:running}
	I0813 20:41:58.254957  202395 cri.go:118] skipping 733aa528df0007a113e9f981ce83c4c26f12db1ddce3a054479660e6f3bad1d7 - not in ps
	I0813 20:41:58.254965  202395 cri.go:116] container: {ID:8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af Status:running}
	I0813 20:41:58.254974  202395 cri.go:118] skipping 8a1abfff1934dac5835c8fa966c5d5d777c16513b99f7278e89793f9259240af - not in ps
	I0813 20:41:58.254983  202395 cri.go:116] container: {ID:b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c Status:running}
	I0813 20:41:58.254994  202395 cri.go:122] skipping {b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c running}: state = "running", want "paused"
	I0813 20:41:58.255006  202395 cri.go:116] container: {ID:ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52 Status:running}
	I0813 20:41:58.255016  202395 cri.go:122] skipping {ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52 running}: state = "running", want "paused"
	I0813 20:41:58.255026  202395 cri.go:116] container: {ID:d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4 Status:running}
	I0813 20:41:58.255039  202395 cri.go:122] skipping {d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4 running}: state = "running", want "paused"
	I0813 20:41:58.255048  202395 cri.go:116] container: {ID:efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427 Status:running}
	I0813 20:41:58.255058  202395 cri.go:118] skipping efa329e948d4291e844cf899136449df31faf01206f66528886b0ac3a3fac427 - not in ps
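For readers following the trace: the cri.go lines above record a two-stage filter over the listed containers — drop anything `crictl ps` did not report (the sandbox IDs), then drop anything whose state is not the wanted one ("paused"). A minimal Go sketch of that predicate, using container IDs truncated from this log; the type and function names are illustrative, not minikube's actual ones.

package main

import "fmt"

// container mirrors the {ID Status} pairs printed in the trace above.
type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers that `crictl ps` reported (psIDs)
// and whose status matches the wanted state, logging a skip reason for
// everything else, as the cri.go trace does.
func filterByState(all []container, psIDs map[string]bool, want string) []container {
	var keep []container
	for _, c := range all {
		if !psIDs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping %+v: state = %q, want %q\n", c, c.Status, want)
			continue
		}
		keep = append(keep, c)
	}
	return keep
}

func main() {
	// IDs truncated from the log above; 733aa528 is a sandbox, so it
	// never shows up in the crictl ps output.
	all := []container{{"1a3a79facf9d", "running"}, {"733aa528df00", "running"}}
	ps := map[string]bool{"1a3a79facf9d": true}
	fmt.Println(filterByState(all, ps, "paused")) // two skips, then []
}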
	I0813 20:41:58.255105  202395 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:41:58.261882  202395 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:41:58.261906  202395 kubeadm.go:600] restartCluster start
	I0813 20:41:58.261961  202395 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:41:58.268005  202395 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:41:58.268823  202395 kubeconfig.go:93] found "kubernetes-upgrade-20210813204027-13784" server: "https://192.168.58.2:8443"
	I0813 20:41:58.269414  202395 kapi.go:59] client config for kubernetes-upgrade-20210813204027-13784: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813204027-13784/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kuberne
tes-upgrade-20210813204027-13784/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:41:58.271006  202395 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:41:58.277128  202395 api_server.go:164] Checking apiserver status ...
	I0813 20:41:58.277181  202395 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:41:58.293597  202395 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	I0813 20:41:58.300592  202395 api_server.go:180] apiserver freezer: "3:freezer:/docker/c400490c8acd6e57c2e9c41ced47db653ca6ae04576919f090f30833cae2bd15/system.slice/crio-b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c.scope"
	I0813 20:41:58.300654  202395 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/c400490c8acd6e57c2e9c41ced47db653ca6ae04576919f090f30833cae2bd15/system.slice/crio-b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c.scope/freezer.state
	I0813 20:41:58.306719  202395 api_server.go:202] freezer state: "THAWED"
	I0813 20:41:58.306748  202395 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0813 20:41:58.314819  202395 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
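The healthz probe above is a two-step check: confirm the apiserver's crio scope is not frozen, then GET the endpoint. A minimal Go sketch of the same sequence, with the freezer path and endpoint copied from this trace. Skipping TLS verification is an assumption made for brevity; the kapi.go line above shows the real client configured with the profile's client.crt and ca.crt instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Freezer path copied from the api_server.go:180 line above.
	const freezer = "/sys/fs/cgroup/freezer/docker/c400490c8acd6e57c2e9c41ced47db653ca6ae04576919f090f30833cae2bd15/system.slice/crio-b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c.scope/freezer.state"
	state, err := os.ReadFile(freezer)
	if err != nil {
		panic(err)
	}
	if s := strings.TrimSpace(string(state)); s != "THAWED" {
		fmt.Printf("apiserver scope is %q; not probing healthz\n", s)
		return
	}
	// Assumption: InsecureSkipVerify for brevity only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %s: %s\n", resp.Status, body) // expect 200 and "ok"
}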
	I0813 20:41:58.330535  202395 system_pods.go:86] 4 kube-system pods found
	I0813 20:41:58.330570  202395 system_pods.go:89] "coredns-fb8b8dccf-4v7wp" [c9483238-fc76-11eb-abf8-02421f46f1af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 20:41:58.330581  202395 system_pods.go:89] "kindnet-smvcw" [c9349808-fc76-11eb-abf8-02421f46f1af] Running
	I0813 20:41:58.330589  202395 system_pods.go:89] "kube-proxy-kctsg" [c9346b24-fc76-11eb-abf8-02421f46f1af] Running
	I0813 20:41:58.330595  202395 system_pods.go:89] "storage-provisioner" [c0ef813b-fc76-11eb-abf8-02421f46f1af] Running
	I0813 20:41:58.330606  202395 kubeadm.go:584] needs reconfigure: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
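The "needs reconfigure" decision compares the kube-system pods found against the control-plane components a healthy cluster needs; coredns is Pending here, so kube-dns counts as missing along with the four static pods. A hedged Go sketch of that comparison — the required list is taken verbatim from the kubeadm.go:584 line, but the prefix-match-and-Running rule is an inference from this trace and the helper is illustrative.

package main

import (
	"fmt"
	"strings"
)

// required matches the component list in the log line above.
var required = []string{"kube-dns", "etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}

// missingComponents reports every required component with no *running*
// pod whose name starts with that component (kube-dns is served by
// coredns pods). Inferred behavior, not minikube's actual code.
func missingComponents(pods map[string]bool) []string {
	var missing []string
	for _, comp := range required {
		prefix := comp
		if comp == "kube-dns" {
			prefix = "coredns"
		}
		found := false
		for name, running := range pods {
			if running && strings.HasPrefix(name, prefix) {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, comp)
		}
	}
	return missing
}

func main() {
	// Pod states from the system_pods.go lines above (coredns is Pending).
	pods := map[string]bool{
		"coredns-fb8b8dccf-4v7wp": false,
		"kindnet-smvcw":           true,
		"kube-proxy-kctsg":        true,
		"storage-provisioner":     true,
	}
	fmt.Println(missingComponents(pods))
	// [kube-dns etcd kube-apiserver kube-controller-manager kube-scheduler]
}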
	I0813 20:41:58.330623  202395 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:41:58.330633  202395 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:41:58.330693  202395 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:41:58.359004  202395 cri.go:76] found id: "1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93"
	I0813 20:41:58.359029  202395 cri.go:76] found id: "b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c"
	I0813 20:41:58.359036  202395 cri.go:76] found id: "d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4"
	I0813 20:41:58.359044  202395 cri.go:76] found id: "ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52"
	I0813 20:41:58.359051  202395 cri.go:76] found id: ""
	I0813 20:41:58.359057  202395 cri.go:221] Stopping containers: [1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93 b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4 ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52]
	I0813 20:41:58.359110  202395 ssh_runner.go:149] Run: which crictl
	I0813 20:41:58.362613  202395 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 1a3a79facf9d534a8ea69654a26a08036187d377a466c167609659bbe9332e93 b9e2f5040ebbe0147f14e65d434562bba6609708fba3a652c62011b5a1cd451c d7fc754d4817caed6eb13bffeab7c6d1a2184e32bb59c0de30d7664a06b599b4 ceba9df916be0f4dc00b71e193b80416dbe33274794cf57b34f2c597db67db52
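The stop step bundles the commands recorded above — `which crictl`, a label-filtered `crictl ps`, then a single `crictl stop` with every ID — into one flow. A minimal Go sketch of that sequence, reusing the exact flags from this trace; it assumes crictl is on PATH and that sudo needs no password.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The trace runs `which crictl`; LookPath is the in-process analogue.
	crictl, err := exec.LookPath("crictl")
	if err != nil {
		panic(err)
	}
	// Flags copied from the crictl ps invocation in the log above.
	out, err := exec.Command("sudo", crictl, "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers to stop")
		return
	}
	// One stop call with every ID, matching the trace's final command.
	args := append([]string{crictl, "stop"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}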
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:41:59 UTC. --
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.135173785Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.136754653Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139480430Z" level=info msg="Conmon does support the --sync option"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139554604Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.139563583Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.144516509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.147006953Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.149207086Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160317327Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.160348934Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558550091Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-ts9sl Namespace:kube-system ID:32623516945f8e006b1accb6fceafbd0782d15abd6698f41a935ef01476eaf11 NetNS:/var/run/netns/03bf7689-f0fa-45ed-a585-bc9e50b977e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 20:40:53 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:53.558791773Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 13 20:40:53 pause-20210813203929-13784 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.306861254Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.450483242Z" level=info msg="Ran pod sandbox c9be4b40ae28787a401f895e24d3f2c31dd92e647d6c4d0d614fb45b73b7e654 with infra container: kube-system/storage-provisioner/POD" id=12bc34f8-3d66-4946-86fe-9921df7dd79a name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.451659183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452327272Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cd1659a5-00dc-4cd7-9cc0-8b6c33df5986 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.452980775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.453580166Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=47b73ecf-8491-4d4d-8ff9-a3c768300751 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.454314289Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466027097Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/passwd: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.466066274Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/407d57a696540996144e347218bab12407f9efa27d5d4ef3bd18bbf8c81238e5/merged/etc/group: no such file or directory"
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.662676174Z" level=info msg="Created container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=858bd268-eef6-4b8f-b251-947d2c89abee name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.663296728Z" level=info msg="Starting container: 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:40:56 pause-20210813203929-13784 crio[2896]: time="2021-08-13 20:40:56.673744233Z" level=info msg="Started container 8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b: kube-system/storage-provisioner/storage-provisioner" id=d39ba6ef-e53e-4742-939f-4d92b33a827c name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	8422317486aff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       0                   c9be4b40ae287
	f5e960ccbf41e       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   About a minute ago   Running             coredns                   0                   32623516945f8
	0e66c2b5613f5       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   About a minute ago   Running             kindnet-cni               0                   3a829ab2057cc
	15fb32d86d158       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   125d82aa8b508
	765e30beb45ae       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   2 minutes ago        Running             etcd                      0                   eae2bc9a9df7c
	ecf109c279e47       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   2 minutes ago        Running             kube-scheduler            0                   3c556f4397e88
	de897ce9eab3c       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   2 minutes ago        Running             kube-controller-manager   0                   4483c604f9ed1
	0a93b9e0c15af       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   2 minutes ago        Running             kube-apiserver            0                   436db4ab23452
	
	* 
	* ==> coredns [f5e960ccbf41efa9aadb572da73ead4c18254a6ce26d90ee8afffd606dac44f5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +2.362662] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	[  +6.305228] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.592043] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 4a 79 66 2b 06 d9 08 06        ......Jyf+....
	[  +0.311286] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a d3 6c 1f a1 fb 08 06        ........l.....
	[Aug13 20:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth638a0651
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 5b 20 34 63 04 08 06        .......[ 4c...
	[ +15.906177] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.377154] cgroup: cgroup2: unknown option "nsdelegate"
	[  +4.439794] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.000008] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 7e 12 eb c5 fa 64 08 06        ......~....d..
	[  +0.270961] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6a 1d 9a 5f 02 4e 08 06        ......j.._.N..
	[ +14.030429] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9a8d7a44
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d6 75 db 96 9a 5d 08 06        .......u...]..
	[Aug13 20:41] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.579695] cgroup: cgroup2: unknown option "nsdelegate"
	[ +32.689527] cgroup: cgroup2: unknown option "nsdelegate"
	[  +7.907139] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [765e30beb45aeca99130c1b9ff5ef0e70463a2c646a848b54adc8a0f42bd6803] <==
	* 2021-08-13 20:40:16.242033 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:0 size:5" took too long (640.465675ms) to execute
	2021-08-13 20:40:16.242101 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" " with result "range_response_count:1 size:4260" took too long (641.294407ms) to execute
	2021-08-13 20:40:17.051976 W | etcdserver: request "header:<ID:8128006947642344446 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" mod_revision:289 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" value_size:3977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813203929-13784\" > >>" with result "size:16" took too long (432.247936ms) to execute
	2021-08-13 20:40:18.660824 W | wal: sync duration of 1.725448216s, expected less than 1s
	2021-08-13 20:40:18.929173 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.00007561s) to execute
	WARNING: 2021/08/13 20:40:18 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-13 20:40:19.784013 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (3.144407226s) to execute
	2021-08-13 20:40:19.784107 W | etcdserver: request "header:<ID:8128006947642344449 > lease_revoke:<id:70cc7b413e210346>" with result "size:29" took too long (1.12306053s) to execute
	2021-08-13 20:40:19.784416 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (2.727675957s) to execute
	2021-08-13 20:40:19.784657 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (2.728835843s) to execute
	2021-08-13 20:40:19.790810 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210813203929-13784\" " with result "range_response_count:1 size:3976" took too long (848.789811ms) to execute
	2021-08-13 20:40:24.423766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:25.164612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:35.163828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:45.164164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:40:55.164376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:41:39.384796 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true " with result "range_response_count:0 size:5" took too long (106.40081ms) to execute
	2021-08-13 20:41:39.392055 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true " with result "range_response_count:0 size:5" took too long (113.28513ms) to execute
	2021-08-13 20:41:39.394712 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (113.466551ms) to execute
	2021-08-13 20:41:39.395210 W | etcdserver: read-only range request "key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.642459ms) to execute
	2021-08-13 20:41:39.395944 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.724887ms) to execute
	2021-08-13 20:41:39.396062 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.818866ms) to execute
	2021-08-13 20:41:39.396181 W | etcdserver: read-only range request "key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.857305ms) to execute
	2021-08-13 20:41:39.396324 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (114.898932ms) to execute
	2021-08-13 20:41:39.396475 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (114.941075ms) to execute
	
	* 
	* ==> kernel <==
	*  20:42:09 up  1:24,  0 users,  load average: 5.28, 3.72, 2.05
	Linux pause-20210813203929-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [0a93b9e0c15af17d0f65cc86b1767c8473d266b51c6a325affc5d3a921837f3f] <==
	* Trace[1263737565]: ---"About to write a response" 852ms (20:40:00.793)
	Trace[1263737565]: [852.227308ms] [852.227308ms] END
	I0813 20:40:23.498807       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:40:23.848939       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:40:39.165407       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:40:39.165451       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:40:39.165458       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:41:39.277412       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:41:39.277465       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:41:39.277475       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 20:41:39.278238       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0813 20:41:39.278305       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:41:39.278321       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:41:39.279137       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0813 20:41:39.279183       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:41:39.279488       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:41:39.279566       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0813 20:41:39.279600       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:41:39.285120       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:41:39.287182       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:41:39.291151       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:41:39.292938       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:41:40.026569       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:41:40.026653       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:41:40.028222       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	
	* 
	* ==> kube-controller-manager [de897ce9eab3c09cb40448f3323219968e3d38d53c9f878f49d11dcdd10cd637] <==
	* I0813 20:40:23.507939       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pjb6w"
	I0813 20:40:23.517415       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k8wlb"
	E0813 20:40:23.548254       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"1eae9bdb-1aea-4c39-a2f9-a9df683878b4", ResourceVersion:"265", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764484003, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00175a8d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00175a8e8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019c5800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a918), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00175a930), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5820)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019c5860)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00048fc20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001baacb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000c8a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000ae2ff0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001baad00)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 20:40:23.661112       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.661142       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:40:23.667920       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:40:23.851329       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:40:23.871778       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:40:23.965355       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ncl4r"
	I0813 20:40:23.980674       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ts9sl"
	I0813 20:40:24.022166       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-ncl4r"
	E0813 20:41:38.681456       1 node_lifecycle_controller.go:1107] Error updating node pause-20210813203929-13784: Put "https://192.168.49.2:8443/api/v1/nodes/pause-20210813203929-13784/status": http2: client connection lost
	W0813 20:41:38.681564       1 garbagecollector.go:705] failed to discover preferred resources: Get "https://192.168.49.2:8443/api?timeout=32s": http2: client connection lost
	E0813 20:41:38.681626       1 resource_quota_controller.go:409] failed to discover resources: Get "https://192.168.49.2:8443/api?timeout=32s": http2: client connection lost
	I0813 20:41:40.051611       1 event.go:291] "Event occurred" object="pause-20210813203929-13784" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node pause-20210813203929-13784 status is now: NodeNotReady"
	I0813 20:41:40.057616       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210813203929-13784" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.070599       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20210813203929-13784" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.082148       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210813203929-13784" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.091913       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210813203929-13784" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.114978       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-pjb6w" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.134563       1 event.go:291] "Event occurred" object="kube-system/kindnet-k8wlb" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.140428       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-ts9sl" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 20:41:40.171387       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0813 20:41:40.171445       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [15fb32d86d1588d35ac71a5a5f8f20dbd4299c6282734a05ba22aaa08271157e] <==
	* I0813 20:40:24.404639       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:40:24.404699       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:40:24.404733       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:40:24.484285       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:40:24.484325       1 server_others.go:212] Using iptables Proxier.
	I0813 20:40:24.484338       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:40:24.484352       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:40:24.484722       1 server.go:643] Version: v1.21.3
	I0813 20:40:24.485344       1 config.go:224] Starting endpoint slice config controller
	I0813 20:40:24.485368       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:40:24.485394       1 config.go:315] Starting service config controller
	I0813 20:40:24.485398       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 20:40:24.494950       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:40:24.496346       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:40:24.585594       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:40:24.585676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [ecf109c279e47c85057297039cb0d9fcd99ccaec33b090f02cddfa5bff876662] <==
	* E0813 20:40:00.668147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:00.668368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.668713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:40:00.669360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:40:00.669416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.669423       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:40:00.669448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:00.680678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:00.680738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:40:00.680761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:00.680779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:40:01.515088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:40:01.520824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:40:01.529811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:40:01.530681       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:40:01.534900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:40:01.603579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:40:01.619709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 20:40:02.065154       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0813 20:41:17.023801       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	I0813 20:41:28.832649       1 trace.go:205] Trace[1216246411]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (13-Aug-2021 20:41:18.831) (total time: 10001ms):
	Trace[1216246411]: [10.001010096s] [10.001010096s] END
	E0813 20:41:28.832675       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=33": net/http: TLS handshake timeout
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:39:32 UTC, end at Fri 2021-08-13 20:42:09 UTC. --
	Aug 13 20:41:40 pause-20210813203929-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899419    4773 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:false CgroupRoot: CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899461    4773 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899476    4773 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899484    4773 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899624    4773 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899656    4773 remote_runtime.go:62] parsed scheme: ""
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899665    4773 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899704    4773 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899714    4773 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899825    4773 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899843    4773 remote_image.go:50] parsed scheme: ""
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899850    4773 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899864    4773 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899870    4773 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899968    4773 kubelet.go:404] "Attempting to sync node with API server"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.899986    4773 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.900011    4773 kubelet.go:283] "Adding apiserver pod source"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.900026    4773 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 13 20:41:44 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:44.912102    4773 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="cri-o" version="1.20.3" apiVersion="v1alpha1"
	Aug 13 20:41:45 pause-20210813203929-13784 kubelet[4773]: E0813 20:41:45.139963    4773 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 13 20:41:45 pause-20210813203929-13784 kubelet[4773]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 13 20:41:45 pause-20210813203929-13784 kubelet[4773]: I0813 20:41:45.140576    4773 server.go:1190] "Started kubelet"
	Aug 13 20:41:45 pause-20210813203929-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:41:45 pause-20210813203929-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [8422317486aff09fb31160b2c5d5a3302f08b37c29c45ec15b3f56f38469e18b] <==
	* 
	goroutine 111 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc0004b4b90, 0x3)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0004b4b80)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00013a4e0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000440c80, 0x18e5530, 0xc000046100, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005e7200)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005e7200, 0x18b3d60, 0xc000272000, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005e7200, 0x3b9aca00, 0x0, 0x1, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0005e7200, 0x3b9aca00, 0xc00023a0c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0813 20:42:09.718348  204307 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/PauseAgain (30.32s)
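For context on the "failed logs error: exit status 110" above: minikube logs collects node state by running kubectl inside the node over SSH, and it is that node-side probe which timed out while systemd was restarting the kubelet (see the kubelet section of the log). Below is a minimal sketch of the equivalent manual probe, with the command and paths copied from the stderr block above; the pause-20210813203929-13784 profile is deleted minutes later per the audit table further down, so this is illustrative only, not a repro recipe:

	# Hypothetical manual re-run of the log-collection step that failed:
	out/minikube-linux-amd64 ssh -p pause-20210813203929-13784 -- \
	  sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	# While the apiserver is still coming back up, this fails with:
	#   Unable to connect to the server: net/http: TLS handshake timeout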

TestStartStop/group/old-k8s-version/serial/Pause (5.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210813204214-13784 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210813204214-13784 --alsologtostderr -v=1: exit status 80 (1.589545142s)

-- stdout --
	* Pausing node old-k8s-version-20210813204214-13784 ... 
	
	

-- /stdout --
** stderr ** 
	I0813 20:49:17.610538  252870 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:49:17.610663  252870 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:17.610675  252870 out.go:311] Setting ErrFile to fd 2...
	I0813 20:49:17.610679  252870 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:49:17.610804  252870 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:49:17.611008  252870 out.go:305] Setting JSON to false
	I0813 20:49:17.611030  252870 mustload.go:65] Loading cluster: old-k8s-version-20210813204214-13784
	I0813 20:49:17.611372  252870 config.go:177] Loaded profile config "old-k8s-version-20210813204214-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:49:17.612492  252870 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:49:17.653334  252870 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:49:17.654123  252870 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210813204214-13784 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:49:17.656692  252870 out.go:177] * Pausing node old-k8s-version-20210813204214-13784 ... 
	I0813 20:49:17.656720  252870 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:49:17.656985  252870 ssh_runner.go:149] Run: systemctl --version
	I0813 20:49:17.657028  252870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:49:17.697658  252870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:49:17.789392  252870 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:17.799603  252870 pause.go:50] kubelet running: true
	I0813 20:49:17.799661  252870 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:49:17.970201  252870 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0813 20:49:18.246653  252870 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:18.255885  252870 pause.go:50] kubelet running: true
	I0813 20:49:18.255965  252870 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:49:18.424739  252870 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0813 20:49:18.965434  252870 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:18.975488  252870 pause.go:50] kubelet running: true
	I0813 20:49:18.975553  252870 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:49:19.138294  252870 out.go:177] 
	W0813 20:49:19.138441  252870 out.go:242] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0813 20:49:19.138456  252870 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0813 20:49:19.141557  252870 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:49:19.143156  252870 out.go:177] 

** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210813204214-13784 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210813204214-13784
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210813204214-13784:

-- stdout --
	[
	    {
	        "Id": "8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34",
	        "Created": "2021-08-13T20:45:33.476318759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 247951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:48:07.359292704Z",
	            "FinishedAt": "2021-08-13T20:48:05.641626429Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34/hostname",
	        "HostsPath": "/var/lib/docker/containers/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34/hosts",
	        "LogPath": "/var/lib/docker/containers/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34-json.log",
	        "Name": "/old-k8s-version-20210813204214-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210813204214-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210813204214-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af2e3db11e0ee3d4527a11456d24ce2827ae11213b9946f79ca95d7d7331a568-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af2e3db11e0ee3d4527a11456d24ce2827ae11213b9946f79ca95d7d7331a568/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af2e3db11e0ee3d4527a11456d24ce2827ae11213b9946f79ca95d7d7331a568/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af2e3db11e0ee3d4527a11456d24ce2827ae11213b9946f79ca95d7d7331a568/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210813204214-13784",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210813204214-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210813204214-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210813204214-13784",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210813204214-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "adf728383ecbb0b98196dcef87c68344d1c85cc657cbe0bc916f766d50414159",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/adf728383ecb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210813204214-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8207b4ce0a52"
	                    ],
	                    "NetworkID": "4f1a585227db4bb5503779d0d20a062df451f1e513337e542778d16c6563ea60",
	                    "EndpointID": "63ab1360b94579b088a51b505aaf473c522b67a25f59804cf07d8eb0aa26d826",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204214-13784 -n old-k8s-version-20210813204214-13784
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210813204214-13784 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p pause-20210813203929-13784                     | pause-20210813203929-13784                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:10 UTC | Fri, 13 Aug 2021 20:42:13 UTC |
	|         | --alsologtostderr -v=5                            |                                                 |         |         |                               |                               |
	| profile | list --output json                                | minikube                                        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:13 UTC | Fri, 13 Aug 2021 20:42:14 UTC |
	| delete  | -p pause-20210813203929-13784                     | pause-20210813203929-13784                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:14 UTC | Fri, 13 Aug 2021 20:42:14 UTC |
	| delete  | -p                                                | kubernetes-upgrade-20210813204027-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:13 UTC | Fri, 13 Aug 2021 20:42:16 UTC |
	|         | kubernetes-upgrade-20210813204027-13784           |                                                 |         |         |                               |                               |
	| delete  | -p                                                | stopped-upgrade-20210813204011-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:56 UTC | Fri, 13 Aug 2021 20:42:58 UTC |
	|         | stopped-upgrade-20210813204011-13784              |                                                 |         |         |                               |                               |
	| delete  | -p                                                | running-upgrade-20210813204143-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:04 UTC | Fri, 13 Aug 2021 20:44:07 UTC |
	|         | running-upgrade-20210813204143-13784              |                                                 |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210813204407-13784      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:07 UTC | Fri, 13 Aug 2021 20:44:07 UTC |
	|         | disable-driver-mounts-20210813204407-13784        |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:59 UTC | Fri, 13 Aug 2021 20:44:11 UTC |
	|         | embed-certs-20210813204258-13784                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=docker                                   |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:23 UTC | Fri, 13 Aug 2021 20:44:24 UTC |
	|         | embed-certs-20210813204258-13784                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:24 UTC | Fri, 13 Aug 2021 20:44:47 UTC |
	|         | embed-certs-20210813204258-13784                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:44:47 UTC |
	|         | embed-certs-20210813204258-13784                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:17 UTC | Fri, 13 Aug 2021 20:44:49 UTC |
	|         | no-preload-20210813204216-13784                   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |         |         |                               |                               |
	|         | --driver=docker                                   |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:01 UTC | Fri, 13 Aug 2021 20:45:02 UTC |
	|         | no-preload-20210813204216-13784                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:07 UTC | Fri, 13 Aug 2021 20:45:15 UTC |
	|         | default-k8s-different-port-20210813204407-13784   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:02 UTC | Fri, 13 Aug 2021 20:45:23 UTC |
	|         | no-preload-20210813204216-13784                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:45:23 UTC |
	|         | no-preload-20210813204216-13784                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:27 UTC | Fri, 13 Aug 2021 20:45:28 UTC |
	|         | default-k8s-different-port-20210813204407-13784   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:28 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:15 UTC | Fri, 13 Aug 2021 20:47:32 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:44 UTC | Fri, 13 Aug 2021 20:47:45 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:48:06
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
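The header above pins down the klog-style line format used for the rest of this trace: severity letter, four-digit date, timestamp, pid, source file:line, then the message. As an illustrative aid (not minikube code), a minimal Go sketch that splits such a line into its fields:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the documented format:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `I0813 20:48:06.280255  247624 out.go:298] Setting OutFile to fd 1 ...`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```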
	I0813 20:48:06.280255  247624 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:48:06.280377  247624 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:48:06.280391  247624 out.go:311] Setting ErrFile to fd 2...
	I0813 20:48:06.280396  247624 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:48:06.280547  247624 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:48:06.280903  247624 out.go:305] Setting JSON to false
	I0813 20:48:06.324507  247624 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5449,"bootTime":1628882237,"procs":354,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:48:06.324598  247624 start.go:121] virtualization: kvm guest
	I0813 20:48:06.329281  247624 out.go:177] * [old-k8s-version-20210813204214-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:48:06.330841  247624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:48:06.329460  247624 notify.go:169] Checking for updates...
	I0813 20:48:06.332337  247624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:48:06.333870  247624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:48:06.335251  247624 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:48:06.335679  247624 config.go:177] Loaded profile config "old-k8s-version-20210813204214-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:48:06.337625  247624 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 20:48:06.337670  247624 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:48:06.387537  247624 docker.go:132] docker version: linux-19.03.15
	I0813 20:48:06.387650  247624 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:48:06.473267  247624 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:48:06.424978688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:48:06.473385  247624 docker.go:244] overlay module found
	I0813 20:48:06.475570  247624 out.go:177] * Using the docker driver based on existing profile
	I0813 20:48:06.475598  247624 start.go:278] selected driver: docker
	I0813 20:48:06.475604  247624 start.go:751] validating driver "docker" against &{Name:old-k8s-version-20210813204214-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813204214-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:48:06.475701  247624 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:48:06.475739  247624 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:48:06.475765  247624 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:48:06.477041  247624 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:48:06.477970  247624 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:48:06.559368  247624 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:48:06.515538545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:48:06.559486  247624 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:48:06.559514  247624 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:48:06.561467  247624 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:48:06.561591  247624 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:48:06.561621  247624 cni.go:93] Creating CNI manager for ""
	I0813 20:48:06.561639  247624 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:48:06.561654  247624 start_flags.go:277] config:
	{Name:old-k8s-version-20210813204214-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813204214-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:48:06.563344  247624 out.go:177] * Starting control plane node old-k8s-version-20210813204214-13784 in cluster old-k8s-version-20210813204214-13784
	I0813 20:48:06.563427  247624 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:48:06.564909  247624 out.go:177] * Pulling base image ...
	I0813 20:48:06.564951  247624 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:48:06.564983  247624 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:48:06.564995  247624 cache.go:56] Caching tarball of preloaded images
	I0813 20:48:06.565057  247624 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:48:06.565166  247624 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:48:06.565192  247624 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on crio
	I0813 20:48:06.565349  247624 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/config.json ...
	I0813 20:48:06.652798  247624 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:48:06.652823  247624 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:48:06.652837  247624 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:48:06.652889  247624 start.go:313] acquiring machines lock for old-k8s-version-20210813204214-13784: {Name:mk76ee894658213dd67f3cb3bd3522bcd5d4bdbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:48:06.653014  247624 start.go:317] acquired machines lock for "old-k8s-version-20210813204214-13784" in 76.644µs
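The two lines above show the per-machine lock being taken with a 500ms retry delay and a 10m timeout before the existing machine is reused. A minimal Go sketch of that acquire-with-timeout pattern, using a hypothetical in-process semaphore rather than the lock package minikube actually uses:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// One semaphore per machine name, mimicking the "acquiring machines lock
// for <name>" step in the trace. Illustrative stand-in only.
var (
	mu    sync.Mutex
	locks = map[string]chan struct{}{}
)

func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
	mu.Lock()
	sem, ok := locks[name]
	if !ok {
		sem = make(chan struct{}, 1)
		locks[name] = sem
	}
	mu.Unlock()

	deadline := time.Now().Add(timeout)
	for {
		select {
		case sem <- struct{}{}: // slot free: lock acquired
			return func() { <-sem }, nil
		default:
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for machine lock " + name)
			}
			time.Sleep(delay) // matches the Delay:500ms retry in the log
		}
	}
}

func main() {
	release, err := acquire("old-k8s-version-20210813204214-13784", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to start the machine")
}
```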
	I0813 20:48:06.653036  247624 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:48:06.653044  247624 fix.go:55] fixHost starting: 
	I0813 20:48:06.653325  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:06.693403  247624 fix.go:108] recreateIfNeeded on old-k8s-version-20210813204214-13784: state=Stopped err=<nil>
	W0813 20:48:06.693436  247624 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:48:04.229060  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.229395  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:04.283195  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.284040  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.381359  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:08.382743  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.696001  247624 out.go:177] * Restarting existing docker container for "old-k8s-version-20210813204214-13784" ...
	I0813 20:48:06.696064  247624 cli_runner.go:115] Run: docker start old-k8s-version-20210813204214-13784
	I0813 20:48:07.366459  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:07.408365  247624 kic.go:420] container "old-k8s-version-20210813204214-13784" state is running.
	I0813 20:48:07.408749  247624 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210813204214-13784
	I0813 20:48:07.450987  247624 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/config.json ...
	I0813 20:48:07.451186  247624 machine.go:88] provisioning docker machine ...
	I0813 20:48:07.451207  247624 ubuntu.go:169] provisioning hostname "old-k8s-version-20210813204214-13784"
	I0813 20:48:07.451249  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:07.495106  247624 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:07.495305  247624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32960 <nil> <nil>}
	I0813 20:48:07.495329  247624 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210813204214-13784 && echo "old-k8s-version-20210813204214-13784" | sudo tee /etc/hostname
	I0813 20:48:07.495912  247624 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60726->127.0.0.1:32960: read: connection reset by peer
	I0813 20:48:10.649554  247624 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210813204214-13784
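The handshake above fails once with "connection reset by peer" because sshd inside the freshly restarted container is not up yet, then succeeds about three seconds later. A hedged sketch of that redial loop; minikube's libmachine ships its own native SSH client, so this uses golang.org/x/crypto/ssh only to show the retry idea, and the address and key path are illustrative values taken from this trace:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps redialing until sshd accepts the handshake.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int, wait time.Duration) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err // e.g. "read: connection reset by peer" while sshd starts
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, lastErr)
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		Timeout:         10 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:32960", cfg, 30, time.Second)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}
```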
	
	I0813 20:48:10.649645  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:10.689131  247624 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:10.689339  247624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32960 <nil> <nil>}
	I0813 20:48:10.689364  247624 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210813204214-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210813204214-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210813204214-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:48:10.813413  247624 main.go:130] libmachine: SSH cmd err, output: <nil>: 
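The shell block above is an idempotent /etc/hosts edit: rewrite the 127.0.1.1 line if one exists, otherwise append one. The same logic as a minimal Go sketch; ensureHostname is a hypothetical helper, and it is pointed at a copy of the file rather than /etc/hosts itself:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell above: skip if the hostname is already
// mapped, rewrite an existing 127.0.1.1 line, or append a new entry.
func ensureHostname(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	entry := "127.0.1.1 " + hostname
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	switch {
	case strings.Contains(string(data), hostname):
		return nil // already present, nothing to do
	case loopback.Match(data):
		out = loopback.ReplaceAllString(string(data), entry)
	default:
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(hostsPath, []byte(out), 0644)
}

func main() {
	if err := ensureHostname("hosts.copy", "old-k8s-version-20210813204214-13784"); err != nil {
		fmt.Println(err)
	}
}
```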
	I0813 20:48:10.813438  247624 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:48:10.813464  247624 ubuntu.go:177] setting up certificates
	I0813 20:48:10.813476  247624 provision.go:83] configureAuth start
	I0813 20:48:10.813565  247624 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210813204214-13784
	I0813 20:48:10.853956  247624 provision.go:138] copyHostCerts
	I0813 20:48:10.854037  247624 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:48:10.854051  247624 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:48:10.854111  247624 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:48:10.854193  247624 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:48:10.854202  247624 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:48:10.854223  247624 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:48:10.854287  247624 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:48:10.854294  247624 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:48:10.854313  247624 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:48:10.854364  247624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210813204214-13784 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210813204214-13784]
	I0813 20:48:11.031308  247624 provision.go:172] copyRemoteCerts
	I0813 20:48:11.031365  247624 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:48:11.031400  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:11.073359  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:11.165003  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:48:11.181601  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0813 20:48:11.197424  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:48:11.213674  247624 provision.go:86] duration metric: configureAuth took 400.183851ms
	I0813 20:48:11.213702  247624 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:48:11.213871  247624 config.go:177] Loaded profile config "old-k8s-version-20210813204214-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:48:11.214018  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:08.729796  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.731255  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:11.256638  247624 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:11.256805  247624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32960 <nil> <nil>}
	I0813 20:48:11.256827  247624 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:48:11.749584  247624 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:48:11.749613  247624 machine.go:91] provisioned docker machine in 4.298413293s
	I0813 20:48:11.749626  247624 start.go:267] post-start starting for "old-k8s-version-20210813204214-13784" (driver="docker")
	I0813 20:48:11.749634  247624 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:48:11.749700  247624 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:48:11.749746  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:11.790794  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:11.884942  247624 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:48:11.887599  247624 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:48:11.887620  247624 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:48:11.887628  247624 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:48:11.887634  247624 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:48:11.887643  247624 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:48:11.887703  247624 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:48:11.887802  247624 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:48:11.887909  247624 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:48:11.894326  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:48:11.910279  247624 start.go:270] post-start completed in 160.638884ms
	I0813 20:48:11.910343  247624 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:48:11.910381  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:11.949868  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:12.037777  247624 fix.go:57] fixHost completed within 5.384720712s
	I0813 20:48:12.037815  247624 start.go:80] releasing machines lock for "old-k8s-version-20210813204214-13784", held for 5.384788215s
	I0813 20:48:12.037932  247624 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210813204214-13784
	I0813 20:48:12.077875  247624 ssh_runner.go:149] Run: systemctl --version
	I0813 20:48:12.077930  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:12.077935  247624 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:48:12.077992  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:12.118315  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:12.118600  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:12.213911  247624 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:48:12.359684  247624 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:48:12.368755  247624 docker.go:153] disabling docker service ...
	I0813 20:48:12.368806  247624 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:48:12.460980  247624 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:48:12.471785  247624 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:48:12.534686  247624 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:48:12.601869  247624 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:48:12.610851  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:48:12.623546  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0813 20:48:12.630923  247624 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:48:12.630965  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 20:48:12.638313  247624 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:48:12.644233  247624 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:48:12.644289  247624 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:48:12.650976  247624 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
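The three runs above are a probe-then-fallback sequence: the sysctl key is missing (status 255), so the runner loads br_netfilter and then enables IPv4 forwarding. A hedged Go sketch of the same sequence using os/exec; it needs root, and the command names are the ones visible in the trace:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge sysctl
// key cannot be read, load br_netfilter, then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key absent: the module is not loaded yet.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
```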
	I0813 20:48:12.656746  247624 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:48:12.714912  247624 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:48:12.724053  247624 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:48:12.724132  247624 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:48:12.727774  247624 start.go:413] Will wait 60s for crictl version
	I0813 20:48:12.727837  247624 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:48:12.758796  247624 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:48:12.758879  247624 ssh_runner.go:149] Run: crio --version
	I0813 20:48:12.823180  247624 ssh_runner.go:149] Run: crio --version
	I0813 20:48:08.284130  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.284262  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:12.783617  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.881189  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:13.381834  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:12.884580  247624 out.go:177] * Preparing Kubernetes v1.14.0 on CRI-O 1.20.3 ...
	I0813 20:48:12.884658  247624 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210813204214-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:48:12.923787  247624 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0813 20:48:12.927116  247624 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:48:12.936132  247624 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:48:12.936188  247624 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:48:12.963472  247624 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:48:12.963494  247624 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:48:12.963547  247624 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:48:12.985284  247624 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:48:12.985312  247624 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:48:12.985375  247624 ssh_runner.go:149] Run: crio config
	I0813 20:48:13.051955  247624 cni.go:93] Creating CNI manager for ""
	I0813 20:48:13.051982  247624 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:48:13.051994  247624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:48:13.052007  247624 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210813204214-13784 NodeName:old-k8s-version-20210813204214-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:48:13.052127  247624 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-20210813204214-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210813204214-13784
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
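
The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch that walks such a stream and reports each document's kind, assuming gopkg.in/yaml.v3 and a local copy of the file:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Each document in the stream declares its own apiVersion and kind;
// a yaml.Decoder iterates the documents in order.
type header struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var h header
		if err := dec.Decode(&h); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
	}
}
```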
	
	I0813 20:48:13.052219  247624 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-20210813204214-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813204214-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:48:13.052265  247624 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0813 20:48:13.059197  247624 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:48:13.059263  247624 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:48:13.065749  247624 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (639 bytes)
	I0813 20:48:13.077293  247624 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:48:13.089126  247624 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0813 20:48:13.100571  247624 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:48:13.103214  247624 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:48:13.111529  247624 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784 for IP: 192.168.76.2
	I0813 20:48:13.111578  247624 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:48:13.111597  247624 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:48:13.111655  247624 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.key
	I0813 20:48:13.111679  247624 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/apiserver.key.31bdca25
	I0813 20:48:13.111701  247624 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/proxy-client.key
	I0813 20:48:13.111807  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:48:13.111867  247624 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:48:13.111882  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:48:13.111916  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:48:13.111956  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:48:13.111994  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:48:13.112055  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:48:13.112951  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:48:13.129200  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:48:13.144609  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:48:13.160140  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:48:13.175551  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:48:13.190989  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:48:13.206257  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:48:13.221623  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:48:13.237944  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:48:13.253436  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:48:13.269144  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:48:13.284926  247624 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:48:13.296120  247624 ssh_runner.go:149] Run: openssl version
	I0813 20:48:13.300736  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:48:13.307616  247624 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:13.310435  247624 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:13.310488  247624 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:13.314856  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:48:13.320921  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:48:13.327644  247624 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:48:13.330480  247624 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:48:13.330512  247624 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:48:13.334846  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:48:13.340722  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:48:13.347333  247624 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:48:13.350159  247624 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:48:13.350196  247624 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:48:13.354589  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
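The openssl/ln pairs above install each CA under /etc/ssl/certs by OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL's lookup-by-hash finds trust roots. A hedged Go sketch of the same hash-and-symlink step, assuming the openssl binary is on PATH and write access to the target directory:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and
// symlinks the certificate as <hash>.0 in certDir, mirroring the trace.
func linkByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```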
	I0813 20:48:13.360555  247624 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210813204214-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813204214-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:48:13.360654  247624 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:48:13.360686  247624 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:48:13.383613  247624 cri.go:76] found id: ""
	I0813 20:48:13.383668  247624 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:48:13.390113  247624 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:48:13.390135  247624 kubeadm.go:600] restartCluster start
	I0813 20:48:13.390177  247624 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:48:13.395888  247624 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:13.396973  247624 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210813204214-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:48:13.397504  247624 kubeconfig.go:128] "old-k8s-version-20210813204214-13784" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:48:13.398463  247624 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
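
The kubeconfig.go:117 and :128 lines above detect that the profile's context is absent from the shared kubeconfig and repair it under a write lock. A sketch of that repair with client-go's clientcmd package; the server URL is taken from later log lines, and reusing the context name for the cluster/user stanzas is an assumption for illustration:

    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext adds a context (plus matching cluster and user stanzas)
    // to an existing kubeconfig if it is missing, then writes the file
    // back, roughly what the "will repair!" line above implies.
    func ensureContext(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Contexts[name]; ok {
            return nil // nothing to repair
        }
        cluster := clientcmdapi.NewCluster()
        cluster.Server = server
        cfg.Clusters[name] = cluster
        cfg.AuthInfos[name] = clientcmdapi.NewAuthInfo()
        cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        err := ensureContext(clientcmd.RecommendedHomeFile,
            "old-k8s-version-20210813204214-13784", "https://192.168.76.2:8443")
        if err != nil {
            log.Fatal(err)
        }
    }
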
	I0813 20:48:13.401242  247624 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:48:13.407099  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:13.407138  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:13.419218  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:13.619588  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:13.619668  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:13.633308  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:13.819444  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:13.819506  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:13.832736  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.020018  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.020099  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.033344  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.219563  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.219649  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.233533  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.419710  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.419779  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.433118  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.619355  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.619423  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.633111  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.820251  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.820335  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.833893  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.020171  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.020241  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.033320  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.219597  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.219668  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.233368  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.419768  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.419854  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.433098  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.619344  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.619423  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.632845  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.820119  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.820189  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.833782  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.019995  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:16.020094  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:16.033862  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.220086  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:16.220182  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:13.230383  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:15.729280  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:15.284095  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:17.782645  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:15.881647  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:18.382914  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	W0813 20:48:16.234244  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.419445  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:16.419510  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:16.432969  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.432993  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:16.433032  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:16.444939  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.444963  247624 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
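
The run of "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs above is a fixed-interval poll: pgrep exits 1 while no kube-apiserver process matches, and once the attempts are exhausted the caller gives up with the "timed out waiting for the condition" error that triggers the reconfigure path. A standalone sketch of that loop; it uses local exec instead of minikube's ssh_runner, and the ~200ms interval is inferred from the timestamps:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep until a kube-apiserver process
    // appears or the timeout elapses; each failed attempt corresponds to
    // one "stopped: unable to get apiserver pid" line in the log.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            // pgrep exits 1 when nothing matches; back off briefly and retry.
            time.Sleep(200 * time.Millisecond)
        }
        return "", errors.New("timed out waiting for the condition")
    }

    func main() {
        pid, err := waitForAPIServerPID(3 * time.Second)
        fmt.Println(pid, err)
    }
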
	I0813 20:48:16.444970  247624 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:48:16.444981  247624 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:48:16.445019  247624 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:48:16.479950  247624 cri.go:76] found id: ""
	I0813 20:48:16.480015  247624 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:48:16.489159  247624 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:48:16.495994  247624 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5751 Aug 13 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Aug 13 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Aug 13 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Aug 13 20:45 /etc/kubernetes/scheduler.conf
	
	I0813 20:48:16.496048  247624 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:48:16.502354  247624 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:48:16.508405  247624 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:48:16.514950  247624 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:48:16.521346  247624 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:48:16.527636  247624 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:48:16.527656  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:16.674292  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:17.134744  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:17.245254  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:17.288162  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
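
The five commands above replay kubeadm's init phases one by one (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the cached v1.14.0 binaries. A sketch of driving the same sequence from Go; binDir and configPath are taken from the log, the helper name is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // rerunControlPlanePhases replays the kubeadm init-phase sequence from
    // the log against an existing config. The PATH override is passed down
    // so any binaries kubeadm spawns resolve against the cached versions.
    func rerunControlPlanePhases(binDir, configPath string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", configPath)
            cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
            cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("phase %v: %w", p, err)
            }
        }
        return nil
    }

    func main() {
        if err := rerunControlPlanePhases("/var/lib/minikube/binaries/v1.14.0",
            "/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
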
	I0813 20:48:17.360004  247624 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:48:17.360112  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:17.875270  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:18.376073  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:18.463698  247624 api_server.go:70] duration metric: took 1.103695361s to wait for apiserver process to appear ...
	I0813 20:48:18.463729  247624 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:48:18.463741  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:18.464169  247624 api_server.go:255] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0813 20:48:18.964871  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:17.729744  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:20.229284  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.230326  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:19.783623  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.283929  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.853517  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:48:22.853551  247624 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:48:22.964759  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:22.968920  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 20:48:22.968942  247624 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 20:48:23.465260  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:23.471242  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 20:48:23.471268  247624 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 20:48:23.964536  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:23.970198  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 20:48:23.970229  247624 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 20:48:24.464727  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:24.469935  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:48:24.479588  247624 api_server.go:139] control plane version: v1.14.0
	I0813 20:48:24.479657  247624 api_server.go:129] duration metric: took 6.01591953s to wait for apiserver health ...
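
The healthz trace above is the normal shape of an apiserver cold start: connection refused while the process binds, 403 while anonymous access to /healthz is still forbidden, 500 while poststarthooks such as rbac/bootstrap-roles are pending, then 200. A sketch of such a probe loop; skipping TLS verification is an assumption to keep it self-contained (minikube's real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz polls /healthz until it returns 200 "ok" or the
    // deadline passes, printing each non-200 body the way the log does.
    func probeHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Demo shortcut: the real client authenticates with the
                // cluster CA instead of skipping verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz never returned 200 within %s", timeout)
    }

    func main() {
        if err := probeHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
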
	I0813 20:48:24.479681  247624 cni.go:93] Creating CNI manager for ""
	I0813 20:48:24.479690  247624 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:48:20.881226  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.882969  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:24.482948  247624 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:48:24.483018  247624 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:48:24.486528  247624 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0813 20:48:24.486549  247624 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:48:24.498573  247624 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:48:24.763615  247624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:48:24.773742  247624 system_pods.go:59] 8 kube-system pods found
	I0813 20:48:24.773773  247624 system_pods.go:61] "coredns-fb8b8dccf-556lc" [814e698b-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773781  247624 system_pods.go:61] "etcd-old-k8s-version-20210813204214-13784" [a49dccef-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773786  247624 system_pods.go:61] "kindnet-gwddx" [8143a261-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773792  247624 system_pods.go:61] "kube-apiserver-old-k8s-version-20210813204214-13784" [a5cf018d-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773798  247624 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210813204214-13784" [ccd414a5-fc77-11eb-b136-02429fe89262] Pending
	I0813 20:48:24.773811  247624 system_pods.go:61] "kube-proxy-97hnd" [8143a1df-fc77-11eb-8f20-0242bfc25c59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 20:48:24.773825  247624 system_pods.go:61] "kube-scheduler-old-k8s-version-20210813204214-13784" [a1a2d87c-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773833  247624 system_pods.go:61] "storage-provisioner" [8215e12f-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773841  247624 system_pods.go:74] duration metric: took 10.205938ms to wait for pod list to return data ...
	I0813 20:48:24.773853  247624 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:48:24.776949  247624 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:48:24.776970  247624 node_conditions.go:123] node cpu capacity is 8
	I0813 20:48:24.776983  247624 node_conditions.go:105] duration metric: took 3.122261ms to run NodePressure ...
	I0813 20:48:24.777002  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:24.962972  247624 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:48:24.966701  247624 kubeadm.go:746] kubelet initialised
	I0813 20:48:24.966722  247624 kubeadm.go:747] duration metric: took 3.722351ms waiting for restarted kubelet to initialise ...
	I0813 20:48:24.966732  247624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:48:24.970304  247624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.981737  247624 pod_ready.go:92] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:24.981757  247624 pod_ready.go:81] duration metric: took 11.428229ms waiting for pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.981769  247624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.986014  247624 pod_ready.go:92] pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:24.986035  247624 pod_ready.go:81] duration metric: took 4.258023ms waiting for pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.986052  247624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.989710  247624 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:24.989728  247624 pod_ready.go:81] duration metric: took 3.666526ms waiting for pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.989739  247624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:26.172506  247624 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:26.172534  247624 pod_ready.go:81] duration metric: took 1.18278695s waiting for pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:26.172546  247624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.729443  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:27.229551  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:24.285035  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:26.783455  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:25.381170  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:27.381282  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:28.372240  247624 pod_ready.go:102] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:29.372288  247624 pod_ready.go:92] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:29.372315  247624 pod_ready.go:81] duration metric: took 3.199762729s waiting for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:29.372326  247624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:29.376413  247624 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:29.376432  247624 pod_ready.go:81] duration metric: took 4.098487ms waiting for pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:29.376443  247624 pod_ready.go:38] duration metric: took 4.409697623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
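
Every pod_ready.go:92/:102 line in this log reduces to one predicate: a pod counts as "Ready" when its PodCondition of type Ready reports status True. A client-go sketch of that check; in-cluster config and the hard-coded pod name are assumptions for brevity:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // isPodReady reports whether the pod's Ready condition is True, which
    // is what flips the log lines from "Ready":"False" to "Ready":"True".
    func isPodReady(cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(isPodReady(cs, "kube-system", "coredns-fb8b8dccf-556lc"))
    }
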
	I0813 20:48:29.376462  247624 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:48:29.397564  247624 ops.go:34] apiserver oom_adj: 16
	I0813 20:48:29.397584  247624 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 20:48:29.397597  247624 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
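
The ops.go:34 and :39 lines read the apiserver's oom_adj (16, inherited from the container) and lower it to -10 so the kernel OOM killer prefers to evict other processes first. A sketch with the same pgrep-then-write shape as the logged shell pipeline; it needs root, and oom_adj is the legacy knob (newer kernels prefer oom_score_adj):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // lowerOOMAdj rewrites /proc/<pid>/oom_adj for the newest process whose
    // command line matches pattern, the same effect as the logged
    // `echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj`.
    func lowerOOMAdj(pattern string, value int) error {
        out, err := exec.Command("pgrep", "-n", "-f", pattern).Output()
        if err != nil {
            return fmt.Errorf("pgrep %q: %w", pattern, err)
        }
        pid := strings.TrimSpace(string(out))
        return os.WriteFile(fmt.Sprintf("/proc/%s/oom_adj", pid),
            []byte(fmt.Sprintf("%d\n", value)), 0644)
    }

    func main() {
        if err := lowerOOMAdj("kube-apiserver", -10); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
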
	I0813 20:48:29.420965  247624 kubeadm.go:604] restartCluster took 16.030818309s
	I0813 20:48:29.420986  247624 kubeadm.go:392] StartCluster complete in 16.060436511s
	I0813 20:48:29.421008  247624 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:48:29.421104  247624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:48:29.422712  247624 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:48:29.933985  247624 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813204214-13784" rescaled to 1
	I0813 20:48:29.934045  247624 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 20:48:29.936012  247624 out.go:177] * Verifying Kubernetes components...
	I0813 20:48:29.936079  247624 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:48:29.934095  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:48:29.934134  247624 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:48:29.934301  247624 config.go:177] Loaded profile config "old-k8s-version-20210813204214-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:48:29.936210  247624 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936228  247624 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936235  247624 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936239  247624 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210813204214-13784"
	W0813 20:48:29.936244  247624 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:48:29.936245  247624 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936251  247624 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210813204214-13784"
	W0813 20:48:29.936255  247624 addons.go:147] addon dashboard should already be in state true
	W0813 20:48:29.936257  247624 addons.go:147] addon metrics-server should already be in state true
	I0813 20:48:29.936287  247624 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:48:29.936288  247624 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:48:29.936215  247624 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936486  247624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936292  247624 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:48:29.936783  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:29.936816  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:29.936882  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:29.937141  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:30.001232  247624 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:48:30.002868  247624 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:48:30.002961  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:48:30.002973  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:48:30.003029  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:30.009789  247624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:48:30.008415  247624 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813204214-13784"
	W0813 20:48:30.009881  247624 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:48:30.009906  247624 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:48:30.009909  247624 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:48:30.009919  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:48:30.009967  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:30.010412  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:30.017865  247624 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:48:30.017945  247624 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:48:30.017961  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:48:30.018040  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:30.033797  247624 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813204214-13784" to be "Ready" ...
	I0813 20:48:30.033952  247624 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:48:30.036376  247624 node_ready.go:49] node "old-k8s-version-20210813204214-13784" has status "Ready":"True"
	I0813 20:48:30.036394  247624 node_ready.go:38] duration metric: took 2.565834ms waiting for node "old-k8s-version-20210813204214-13784" to be "Ready" ...
	I0813 20:48:30.036405  247624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:48:30.042433  247624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:30.061111  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:30.063475  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:30.071874  247624 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:48:30.071911  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:48:30.071963  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:30.079028  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:30.114512  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:30.158647  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:48:30.158669  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:48:30.163108  247624 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:48:30.171156  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:48:30.171179  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:48:30.175077  247624 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:48:30.175097  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:48:30.183604  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:48:30.183626  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:48:30.187978  247624 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:48:30.187995  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:48:30.196520  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:48:30.196545  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:48:30.201241  247624 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:48:30.201261  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:48:30.210162  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:48:30.210182  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:48:30.258135  247624 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:48:30.259266  247624 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:48:30.271151  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:48:30.271177  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:48:30.285366  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:48:30.285392  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:48:30.358111  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:48:30.358139  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:48:30.373229  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:48:30.373255  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:48:30.387993  247624 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:48:30.716958  247624 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210813204214-13784"
	I0813 20:48:30.836432  247624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 20:48:30.836461  247624 addons.go:344] enableAddons completed in 902.350782ms
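
Each addon above is installed with the same two-step pattern: stream the manifest onto the node ("scp memory --> /etc/kubernetes/addons/...") and then apply it with the node-local kubectl and kubeconfig. A sketch of one such step; the demo manifest and file name are assumptions, the kubectl and kubeconfig paths come from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddon writes a manifest into the addons directory and applies it
    // with the node-local kubectl, mirroring the log's "scp memory -->"
    // plus "kubectl apply -f" pair for each addon file.
    func applyAddon(path string, manifest []byte) error {
        if err := os.WriteFile(path, manifest, 0644); err != nil {
            return err
        }
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.14.0/kubectl", "apply", "-f", path)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo-addon\n")
        if err := applyAddon("/etc/kubernetes/addons/demo-addon.yaml", manifest); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
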
	I0813 20:48:29.230090  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:31.728672  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:29.283870  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:31.783610  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:29.881087  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:32.381194  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:34.381388  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:32.052560  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:34.052825  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:36.053243  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:33.729991  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:36.229802  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:34.283551  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:36.783755  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:36.880556  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.881294  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.053829  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:40.055059  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.230069  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:40.728676  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:39.283302  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:41.284504  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:41.382701  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:43.881109  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.553186  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.561372  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.729355  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.730264  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:47.230313  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:43.783584  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:45.783881  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:45.882125  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:48.381404  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:46.563785  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.053875  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.729434  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:52.229100  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:48.284093  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:50.782799  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:52.782850  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:50.881295  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:52.881561  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:51.557330  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.052925  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.053467  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.230528  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.729710  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.783262  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.783300  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:55.381418  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:57.880755  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.554094  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:01.053879  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.729881  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:00.732184  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:59.283090  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:01.283459  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:59.881433  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:02.380600  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:04.381265  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.554673  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.553647  247624 pod_ready.go:92] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.553675  247624 pod_ready.go:81] duration metric: took 35.511214581s waiting for pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.553689  247624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.557689  247624 pod_ready.go:92] pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.557707  247624 pod_ready.go:81] duration metric: took 4.010218ms waiting for pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.557718  247624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.561155  247624 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.561171  247624 pod_ready.go:81] duration metric: took 3.444956ms waiting for pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.561180  247624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.564821  247624 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.564835  247624 pod_ready.go:81] duration metric: took 3.649416ms waiting for pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.564844  247624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.568289  247624 pod_ready.go:92] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.568306  247624 pod_ready.go:81] duration metric: took 3.456412ms waiting for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.568314  247624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.951947  247624 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.951969  247624 pod_ready.go:81] duration metric: took 383.647763ms waiting for pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.951980  247624 pod_ready.go:38] duration metric: took 35.915563837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:49:05.951999  247624 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:49:05.952039  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:05.977021  247624 api_server.go:70] duration metric: took 36.042945555s to wait for apiserver process to appear ...
	I0813 20:49:05.977043  247624 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:49:05.977053  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:49:05.982278  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:49:05.982985  247624 api_server.go:139] control plane version: v1.14.0
	I0813 20:49:05.983029  247624 api_server.go:129] duration metric: took 5.980504ms to wait for apiserver health ...
	I0813 20:49:05.983038  247624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:49:06.153346  247624 system_pods.go:59] 9 kube-system pods found
	I0813 20:49:06.153374  247624 system_pods.go:61] "coredns-fb8b8dccf-556lc" [814e698b-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153379  247624 system_pods.go:61] "etcd-old-k8s-version-20210813204214-13784" [a49dccef-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153383  247624 system_pods.go:61] "kindnet-gwddx" [8143a261-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153394  247624 system_pods.go:61] "kube-apiserver-old-k8s-version-20210813204214-13784" [a5cf018d-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153398  247624 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210813204214-13784" [ccd414a5-fc77-11eb-b136-02429fe89262] Running
	I0813 20:49:06.153401  247624 system_pods.go:61] "kube-proxy-97hnd" [8143a1df-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153406  247624 system_pods.go:61] "kube-scheduler-old-k8s-version-20210813204214-13784" [a1a2d87c-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153413  247624 system_pods.go:61] "metrics-server-8546d8b77b-dprvt" [d74eab74-fc77-11eb-b136-02429fe89262] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:49:06.153419  247624 system_pods.go:61] "storage-provisioner" [8215e12f-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153425  247624 system_pods.go:74] duration metric: took 170.381598ms to wait for pod list to return data ...
	I0813 20:49:06.153437  247624 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:49:06.351595  247624 default_sa.go:45] found service account: "default"
	I0813 20:49:06.351618  247624 default_sa.go:55] duration metric: took 198.175573ms for default service account to be created ...
	I0813 20:49:06.351626  247624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:49:06.554202  247624 system_pods.go:86] 9 kube-system pods found
	I0813 20:49:06.554236  247624 system_pods.go:89] "coredns-fb8b8dccf-556lc" [814e698b-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554245  247624 system_pods.go:89] "etcd-old-k8s-version-20210813204214-13784" [a49dccef-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554251  247624 system_pods.go:89] "kindnet-gwddx" [8143a261-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554260  247624 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813204214-13784" [a5cf018d-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554268  247624 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813204214-13784" [ccd414a5-fc77-11eb-b136-02429fe89262] Running
	I0813 20:49:06.554280  247624 system_pods.go:89] "kube-proxy-97hnd" [8143a1df-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554287  247624 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813204214-13784" [a1a2d87c-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554298  247624 system_pods.go:89] "metrics-server-8546d8b77b-dprvt" [d74eab74-fc77-11eb-b136-02429fe89262] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:49:06.554307  247624 system_pods.go:89] "storage-provisioner" [8215e12f-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554319  247624 system_pods.go:126] duration metric: took 202.688336ms to wait for k8s-apps to be running ...
	I0813 20:49:06.554335  247624 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:49:06.554394  247624 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:06.564011  247624 system_svc.go:56] duration metric: took 9.671803ms WaitForService to wait for kubelet.
	I0813 20:49:06.564037  247624 kubeadm.go:547] duration metric: took 36.629963456s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:49:06.564064  247624 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:49:06.751768  247624 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:49:06.751791  247624 node_conditions.go:123] node cpu capacity is 8
	I0813 20:49:06.751806  247624 node_conditions.go:105] duration metric: took 187.737131ms to run NodePressure ...
	I0813 20:49:06.751818  247624 start.go:231] waiting for startup goroutines ...
	I0813 20:49:06.801010  247624 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0813 20:49:06.803519  247624 out.go:177] 
	W0813 20:49:06.803779  247624 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0813 20:49:06.805247  247624 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:49:06.806842  247624 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210813204214-13784" cluster and "default" namespace by default
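	Note on the skew warning above: the host kubectl (v1.20.5) is six minor versions ahead of the v1.14.0 control plane, well outside the supported +/-1 version skew. As the output itself suggests, a version-matched client can be run through minikube, which downloads and caches a kubectl matching the cluster version (binary and profile name taken from this run):
	
	  out/minikube-linux-amd64 -p old-k8s-version-20210813204214-13784 kubectl -- get pods -A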
	I0813 20:49:03.229347  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.229979  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.783900  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:06.283113  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:06.881861  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:09.380594  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:07.729750  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:10.229721  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:08.283866  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:10.782970  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.783580  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:11.880660  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:14.380529  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.730091  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:15.228721  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.230083  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:15.283194  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.320017  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:16.380711  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:18.381817  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:48:07 UTC, end at Fri 2021-08-13 20:49:19 UTC. --
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.501209391Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,RepoTags:[k8s.gcr.io/coredns:1.3.1],RepoDigests:[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns@sha256:638adb0319813f2479ba3642bbe37136db8cf363b48fb3eb7dc8db634d8d5a5b],Size_:40535007,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f44c6b69-84d3-4d58-bb06-d1c062830026 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.501904241Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.3.1" id=2e89ffb6-1e40-43cd-a8ad-92b784aecfe1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.502569424Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,RepoTags:[k8s.gcr.io/coredns:1.3.1],RepoDigests:[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns@sha256:638adb0319813f2479ba3642bbe37136db8cf363b48fb3eb7dc8db634d8d5a5b],Size_:40535007,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2e89ffb6-1e40-43cd-a8ad-92b784aecfe1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.503314662Z" level=info msg="Creating container: kube-system/coredns-fb8b8dccf-556lc/coredns" id=87fd1137-37e6-4d18-a8a5-f1c26d8f3d57 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.515434297Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/75658946d2e527aa23f1ae73e7425bd91739bef836d70fcd13a8b707656a6232/merged/etc/passwd: no such file or directory"
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.515465237Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/75658946d2e527aa23f1ae73e7425bd91739bef836d70fcd13a8b707656a6232/merged/etc/group: no such file or directory"
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.685047580Z" level=info msg="Created container 67b0bb98fb4778bb265f7c6a281ed4685496b7db464383984a767e2e8fb2e89f: kube-system/coredns-fb8b8dccf-556lc/coredns" id=87fd1137-37e6-4d18-a8a5-f1c26d8f3d57 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.685635840Z" level=info msg="Starting container: 67b0bb98fb4778bb265f7c6a281ed4685496b7db464383984a767e2e8fb2e89f" id=d5d4b327-1f13-4bce-a31f-e5b7f3f7332b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.696805052Z" level=info msg="Started container 67b0bb98fb4778bb265f7c6a281ed4685496b7db464383984a767e2e8fb2e89f: kube-system/coredns-fb8b8dccf-556lc/coredns" id=d5d4b327-1f13-4bce-a31f-e5b7f3f7332b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:56.370149746Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=82412610-204f-4f0c-a6ae-9e41379e8d73 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:56.370425670Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=82412610-204f-4f0c-a6ae-9e41379e8d73 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:56.371013049Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=a5fdf84d-3c2b-4bf9-988a-6ed018fcc108 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:56.377784613Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.370285225Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=87efa43b-b3ed-4f98-9832-91336135b201 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.371879467Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=87efa43b-b3ed-4f98-9832-91336135b201 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.372422210Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=57ec3908-8d4b-49cb-aad1-7c142a045aea name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.373772812Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=57ec3908-8d4b-49cb-aad1-7c142a045aea name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.374378970Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-2jx9q/dashboard-metrics-scraper" id=a8827247-fea8-410e-bf53-369cb0f03972 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.530447633Z" level=info msg="Created container c44b552369117529c015981632f911c5880cc514b50a6391accd679475153c18: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-2jx9q/dashboard-metrics-scraper" id=a8827247-fea8-410e-bf53-369cb0f03972 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.530901429Z" level=info msg="Starting container: c44b552369117529c015981632f911c5880cc514b50a6391accd679475153c18" id=76949d97-de3a-4a84-81de-02a46188c3d0 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.560450918Z" level=info msg="Started container c44b552369117529c015981632f911c5880cc514b50a6391accd679475153c18: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-2jx9q/dashboard-metrics-scraper" id=76949d97-de3a-4a84-81de-02a46188c3d0 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:49:08 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:08.522455783Z" level=info msg="Removing container: 21c7f944955ffd23ab4e4a0f45e83016beccea4d618f23bb6ad5eb63d884de4e" id=a4230441-ae99-4593-a72b-eb2e0573df44 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:49:08 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:08.558211054Z" level=info msg="Removed container 21c7f944955ffd23ab4e4a0f45e83016beccea4d618f23bb6ad5eb63d884de4e: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-2jx9q/dashboard-metrics-scraper" id=a4230441-ae99-4593-a72b-eb2e0573df44 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:49:09 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:09.370089831Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=d4206612-cb91-41f7-b430-d92fd34840c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:09 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:09.370337591Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=d4206612-cb91-41f7-b430-d92fd34840c5 name=/runtime.v1alpha2.ImageService/ImageStatus
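	Note: the pull attempts for fake.domain/k8s.gcr.io/echoserver:1.4 above can never succeed, since fake.domain is not a resolvable registry; the metrics-server pod in this run references that image. A minimal reproduction from the host, assuming crictl is present in the node image:
	
	  out/minikube-linux-amd64 -p old-k8s-version-20210813204214-13784 ssh "sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4"
	
	This should fail with the same "no such host" DNS error recorded in the kubelet section below.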
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID
	c44b552369117       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   12 seconds ago       Exited              dashboard-metrics-scraper   2                   8c495a1818306
	67b0bb98fb477       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   25 seconds ago       Running             coredns                     1                   6dc7107cb9773
	876a6d6b9a9eb       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   38 seconds ago       Running             kubernetes-dashboard        0                   6e7c77f8b6aa0
	1520fea7b1a89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   42 seconds ago       Running             storage-provisioner         2                   cfbe586c72f2b
	05c24bd3cd45c       5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d   55 seconds ago       Running             kube-proxy                  0                   7574f10e78dc5
	db200865c8d03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   55 seconds ago       Exited              storage-provisioner         1                   cfbe586c72f2b
	ca6d391b3893d       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   55 seconds ago       Exited              coredns                     0                   6dc7107cb9773
	a3debb5e67085       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   55 seconds ago       Running             busybox                     0                   9e8de2b4b107e
	cc1c4fc69c79b       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   56 seconds ago       Running             kindnet-cni                 0                   64c4bc16ab677
	eb969b98501aa       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d   About a minute ago   Running             etcd                        0                   9bbe35ae5a5c4
	05205c8228507       ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6   About a minute ago   Running             kube-apiserver              0                   865f284b87841
	f0e9f48420d19       b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150   About a minute ago   Running             kube-controller-manager     0                   4c182d158cb0b
	c9eaddd82f247       00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492   About a minute ago   Running             kube-scheduler              0                   99f3465d1e237
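	Note: dashboard-metrics-scraper shows STATE Exited at ATTEMPT 2, matching the CrashLoopBackOff entries in the kubelet section below. A minimal way to pull the failed container's output from inside the node, again assuming crictl is present (container ID taken from the table above):
	
	  out/minikube-linux-amd64 -p old-k8s-version-20210813204214-13784 ssh "sudo crictl logs c44b552369117"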
	
	* 
	* ==> coredns [67b0bb98fb4778bb265f7c6a281ed4685496b7db464383984a767e2e8fb2e89f] <==
	* .:53
	2021-08-13T20:48:54.816Z [INFO] CoreDNS-1.3.1
	2021-08-13T20:48:54.816Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T20:48:54.816Z [INFO] plugin/reload: Running configuration MD5 = 08b08abdbcbace0be59f7a292f5ad181
	
	* 
	* ==> coredns [ca6d391b3893d9aa6fc3eefb859f5bac4e008775a6fd15c2024f7f88b33609ff] <==
	* .:53
	2021-08-13T20:46:42.268Z [INFO] CoreDNS-1.3.1
	2021-08-13T20:46:42.268Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T20:46:42.268Z [INFO] plugin/reload: Running configuration MD5 = 08b08abdbcbace0be59f7a292f5ad181
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2021-08-13T20:48:29.267Z [INFO] CoreDNS-1.3.1
	2021-08-13T20:48:29.267Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T20:48:29.268Z [INFO] plugin/reload: Running configuration MD5 = 08b08abdbcbace0be59f7a292f5ad181
	E0813 20:48:54.268215       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:48:54.268215       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-556lc.unknownuser.log.ERROR.20210813-204854.1: no such file or directory
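	Note: this first coredns instance lost its apiserver watches across the node restart ("dial tcp 10.96.0.1:443: i/o timeout") and then exited while failing to write a crash log under /tmp; its replacement (previous section) came up cleanly at 20:48:54. A minimal check of a restarted container's prior output, using the context and pod name from this run:
	
	  kubectl --context old-k8s-version-20210813204214-13784 -n kube-system logs coredns-fb8b8dccf-556lc --previous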
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813204214-13784
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210813204214-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=old-k8s-version-20210813204214-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_45_58_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:45:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:48:23 +0000   Fri, 13 Aug 2021 20:45:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:48:23 +0000   Fri, 13 Aug 2021 20:45:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:48:23 +0000   Fri, 13 Aug 2021 20:45:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:48:23 +0000   Fri, 13 Aug 2021 20:45:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20210813204214-13784
	Capacity:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	System Info:
	 Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	 System UUID:                958edac4-ce8f-4ebc-810e-7874212ae9be
	 Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	 Kernel Version:             4.9.0-16-amd64
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.20.3
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (12 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                coredns-fb8b8dccf-556lc                                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m4s
	  kube-system                etcd-old-k8s-version-20210813204214-13784                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                kindnet-gwddx                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m4s
	  kube-system                kube-apiserver-old-k8s-version-20210813204214-13784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                kube-controller-manager-old-k8s-version-20210813204214-13784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                kube-proxy-97hnd                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                kube-scheduler-old-k8s-version-20210813204214-13784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                metrics-server-8546d8b77b-dprvt                                 100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         40s
	  kube-system                storage-provisioner                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-2jx9q                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-2jdmv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                              Message
	  ----    ------                   ----                   ----                                              -------
	  Normal  NodeHasSufficientMemory  3m34s (x8 over 3m35s)  kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m34s (x8 over 3m35s)  kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m34s (x8 over 3m35s)  kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m2s                   kube-proxy, old-k8s-version-20210813204214-13784  Starting kube-proxy.
	  Normal  Starting                 63s                    kubelet, old-k8s-version-20210813204214-13784     Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 55s                    kube-proxy, old-k8s-version-20210813204214-13784  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000005] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000001] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +2.011829] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000002] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000011] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000002] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +4.159725] net_ratelimit: 1 callbacks suppressed
	[  +0.000002] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000003] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000004] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000003] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000004] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000001] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +8.191387] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000002] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000003] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000001] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000004] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000001] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +1.556850] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethbb85f246
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 9e 18 ea a0 26 43 08 06        ..........&C..
	[  +0.083664] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth492f01f6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 6e 43 78 f7 c6 0c 08 06        ......nCx.....
	[  +0.000838] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethe4b785c7
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e c7 3b a1 94 fd 08 06        ......^.;.....
	
	* 
	* ==> etcd [eb969b98501aa07fcc3fb122054c3aa65d6e26b39a5cae359f019788fbbc3d94] <==
	* 2021-08-13 20:48:18.390907 I | etcdserver: snapshot count = 10000
	2021-08-13 20:48:18.390924 I | etcdserver: advertise client URLs = https://192.168.76.2:2379
	2021-08-13 20:48:18.462863 I | etcdserver: restarting member ea7e25599daad906 in cluster 6f20f2c4b2fb5f8a at commit index 552
	2021-08-13 20:48:18.462953 I | raft: ea7e25599daad906 became follower at term 2
	2021-08-13 20:48:18.462967 I | raft: newRaft ea7e25599daad906 [peers: [], term: 2, commit: 552, applied: 0, lastindex: 552, lastterm: 2]
	2021-08-13 20:48:18.473780 W | auth: simple token is not cryptographically signed
	2021-08-13 20:48:18.475878 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	2021-08-13 20:48:18.476457 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2021-08-13 20:48:18.476547 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-13 20:48:18.476584 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-13 20:48:18.478581 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:48:18.478730 I | embed: listening for metrics on http://192.168.76.2:2381
	2021-08-13 20:48:18.478808 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 20:48:20.263923 I | raft: ea7e25599daad906 is starting a new election at term 2
	2021-08-13 20:48:20.263956 I | raft: ea7e25599daad906 became candidate at term 3
	2021-08-13 20:48:20.263979 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3
	2021-08-13 20:48:20.263991 I | raft: ea7e25599daad906 became leader at term 3
	2021-08-13 20:48:20.263998 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3
	2021-08-13 20:48:20.264923 I | embed: ready to serve client requests
	2021-08-13 20:48:20.265043 I | etcdserver: published {Name:old-k8s-version-20210813204214-13784 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2021-08-13 20:48:20.265072 I | embed: ready to serve client requests
	2021-08-13 20:48:20.266973 I | embed: serving client requests on 192.168.76.2:2379
	2021-08-13 20:48:20.266995 I | embed: serving client requests on 127.0.0.1:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
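	Note: the raft lines show the single member ea7e25599daad906 re-electing itself leader at term 3 after the restart, the expected recovery path for a one-node cluster. Per the "listening for metrics" lines, a minimal liveness spot-check from inside the node (assuming curl is available there):
	
	  out/minikube-linux-amd64 -p old-k8s-version-20210813204214-13784 ssh "curl -s http://127.0.0.1:2381/metrics | head -n 3"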
	
	* 
	* ==> kernel <==
	*  20:49:20 up  1:32,  0 users,  load average: 0.65, 2.21, 2.08
	Linux old-k8s-version-20210813204214-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [05205c822850737a37cab612c32f44b5a2db19048dd2966025f9271d3a6c18b3] <==
	* I0813 20:49:07.304406       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:08.304567       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:08.304690       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:09.304873       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:09.304987       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:10.305135       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:10.305241       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:11.305405       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:11.305608       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:12.305802       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:12.305945       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:13.306095       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:13.306191       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:14.306344       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:14.306456       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:15.306620       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:15.306737       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:16.306904       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:16.307019       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:17.307195       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:17.307317       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:18.307493       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:18.307630       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:19.307788       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:19.307931       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [f0e9f48420d191b31eb588742c5afcfec0dcf6f3d64c41b3a983f3a0bceca151] <==
	* I0813 20:48:40.848193       1 controller_utils.go:1034] Caches are synced for PVC protection controller
	I0813 20:48:40.848225       1 controller_utils.go:1034] Caches are synced for stateful set controller
	I0813 20:48:40.848480       1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
	I0813 20:48:40.849461       1 controller_utils.go:1034] Caches are synced for taint controller
	I0813 20:48:40.849532       1 taint_manager.go:198] Starting NoExecuteTaintManager
	I0813 20:48:40.849567       1 node_lifecycle_controller.go:1159] Initializing eviction metric for zone: 
	W0813 20:48:40.849625       1 node_lifecycle_controller.go:833] Missing timestamp for Node old-k8s-version-20210813204214-13784. Assuming now as a timestamp.
	I0813 20:48:40.849711       1 node_lifecycle_controller.go:1059] Controller detected that zone  is now in state Normal.
	I0813 20:48:40.849761       1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20210813204214-13784", UID:"73415350-fc77-11eb-8f20-0242bfc25c59", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20210813204214-13784 event: Registered Node old-k8s-version-20210813204214-13784 in Controller
	I0813 20:48:40.851823       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"b603b5f2-fc77-11eb-8f20-0242bfc25c59", APIVersion:"apps/v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-dprvt
	I0813 20:48:40.865860       1 controller_utils.go:1034] Caches are synced for resource quota controller
	I0813 20:48:40.876373       1 controller_utils.go:1034] Caches are synced for persistent volume controller
	I0813 20:48:40.880506       1 controller_utils.go:1034] Caches are synced for deployment controller
	I0813 20:48:40.886357       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"d14e8bbd-fc77-11eb-b136-02429fe89262", APIVersion:"apps/v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
	I0813 20:48:40.886398       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"d14d9ac8-fc77-11eb-b136-02429fe89262", APIVersion:"apps/v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-5b494cc544 to 1
	I0813 20:48:40.890318       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"d753a3c5-fc77-11eb-b136-02429fe89262", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-2jdmv
	I0813 20:48:40.890347       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"d753a299-fc77-11eb-b136-02429fe89262", APIVersion:"apps/v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-2jx9q
	I0813 20:48:40.898257       1 controller_utils.go:1034] Caches are synced for ReplicationController controller
	I0813 20:48:40.898452       1 controller_utils.go:1034] Caches are synced for endpoint controller
	I0813 20:48:40.898813       1 controller_utils.go:1034] Caches are synced for daemon sets controller
	W0813 20:48:42.498523       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0813 20:48:42.498730       1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
	I0813 20:48:42.598948       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	E0813 20:49:10.300643       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:49:14.600384       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [05c24bd3cd45c1e189a79e4e9ba27a9f145567441a61ec98a59569cdde45b95e] <==
	* W0813 20:46:17.270192       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 20:46:17.280785       1 server_others.go:148] Using iptables Proxier.
	I0813 20:46:17.281454       1 server_others.go:178] Tearing down inactive rules.
	I0813 20:46:17.997879       1 server.go:555] Version: v1.14.0
	I0813 20:46:18.003605       1 config.go:202] Starting service config controller
	I0813 20:46:18.003774       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 20:46:18.003636       1 config.go:102] Starting endpoints config controller
	I0813 20:46:18.003822       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 20:46:18.104082       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 20:46:18.104168       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	W0813 20:48:25.073320       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 20:48:25.080063       1 server_others.go:148] Using iptables Proxier.
	I0813 20:48:25.080199       1 server_others.go:178] Tearing down inactive rules.
	I0813 20:48:25.549835       1 server.go:555] Version: v1.14.0
	I0813 20:48:25.555142       1 config.go:102] Starting endpoints config controller
	I0813 20:48:25.555187       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 20:48:25.555176       1 config.go:202] Starting service config controller
	I0813 20:48:25.555205       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 20:48:25.655356       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 20:48:25.655362       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-scheduler [c9eaddd82f247ad2fabdaea849dae20fa4b638fb49c9c0ff051de61572d809bc] <==
	* E0813 20:45:53.977426       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:45:53.978410       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:45:53.979544       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:45:53.980576       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0813 20:45:55.760637       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 20:45:55.860784       1 controller_utils.go:1034] Caches are synced for scheduler controller
	I0813 20:48:18.906609       1 serving.go:319] Generated self-signed cert in-memory
	W0813 20:48:19.277566       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	W0813 20:48:19.277587       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	W0813 20:48:19.277600       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	I0813 20:48:19.281037       1 server.go:142] Version: v1.14.0
	I0813 20:48:19.281092       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0813 20:48:19.282226       1 authorization.go:47] Authorization is disabled
	W0813 20:48:19.282247       1 authentication.go:55] Authentication is disabled
	I0813 20:48:19.282262       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 20:48:19.282660       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 20:48:22.853878       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:22.878409       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:22.878508       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:22.878557       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:22.878580       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:22.878737       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:22.878776       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0813 20:48:24.684181       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 20:48:24.784361       1 controller_utils.go:1034] Caches are synced for scheduler controller
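	Note: the "forbidden" list errors at 20:48:22 look like transient startup noise: the scheduler's informers begin listing before the restarted apiserver has finished reconciling its bootstrap RBAC roles, and they stop once "Caches are synced" is logged two seconds later. Since healthz is served insecurely on 10251 (per the log above), a minimal spot-check is:
	
	  out/minikube-linux-amd64 -p old-k8s-version-20210813204214-13784 ssh "curl -s http://127.0.0.1:10251/healthz"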
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:48:07 UTC, end at Fri 2021-08-13 20:49:20 UTC. --
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.895362     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/d74eab74-fc77-11eb-b136-02429fe89262-tmp-dir") pod "metrics-server-8546d8b77b-dprvt" (UID: "d74eab74-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.895417     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-kf7vs" (UniqueName: "kubernetes.io/secret/d74eab74-fc77-11eb-b136-02429fe89262-metrics-server-token-kf7vs") pod "metrics-server-8546d8b77b-dprvt" (UID: "d74eab74-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.995716     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/d7544fce-fc77-11eb-b136-02429fe89262-tmp-volume") pod "dashboard-metrics-scraper-5b494cc544-2jx9q" (UID: "d7544fce-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.995764     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-d9mng" (UniqueName: "kubernetes.io/secret/d7544fce-fc77-11eb-b136-02429fe89262-kubernetes-dashboard-token-d9mng") pod "dashboard-metrics-scraper-5b494cc544-2jx9q" (UID: "d7544fce-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.995811     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/d75463dd-fc77-11eb-b136-02429fe89262-tmp-volume") pod "kubernetes-dashboard-5d8978d65d-2jdmv" (UID: "d75463dd-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.995970     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-d9mng" (UniqueName: "kubernetes.io/secret/d75463dd-fc77-11eb-b136-02429fe89262-kubernetes-dashboard-token-d9mng") pod "kubernetes-dashboard-5d8978d65d-2jdmv" (UID: "d75463dd-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.410302     957 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.410402     957 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.410482     957 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.410546     957 pod_workers.go:190] Error syncing pod d74eab74-fc77-11eb-b136-02429fe89262 ("metrics-server-8546d8b77b-dprvt_kube-system(d74eab74-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.476969     957 pod_workers.go:190] Error syncing pod d74eab74-fc77-11eb-b136-02429fe89262 ("metrics-server-8546d8b77b-dprvt_kube-system(d74eab74-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:48:47 old-k8s-version-20210813204214-13784 kubelet[957]: W0813 20:48:47.504819     957 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:48:49 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:49.491163     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:48:50 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:50.493681     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:48:53 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:53.376307     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:56.382667     957 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:56.382726     957 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:56.382812     957 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:56.382852     957 pod_workers.go:190] Error syncing pod d74eab74-fc77-11eb-b136-02429fe89262 ("metrics-server-8546d8b77b-dprvt_kube-system(d74eab74-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 kubelet[957]: W0813 20:49:07.539677     957 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 kubelet[957]: W0813 20:49:07.554478     957 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:49:08 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:49:08.522404     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:49:09 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:49:09.370549     957 pod_workers.go:190] Error syncing pod d74eab74-fc77-11eb-b136-02429fe89262 ("metrics-server-8546d8b77b-dprvt_kube-system(d74eab74-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:49:13 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:49:13.376270     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:49:17 old-k8s-version-20210813204214-13784 kubelet[957]: W0813 20:49:17.564142     957 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
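The metrics-server pull failures in the kubelet log above are expected for this test: the addon's registry was deliberately overridden to the placeholder fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" rows in the Audit table below), so every pull dies at DNS resolution. A minimal sketch of the same failure class, outside the test suite:

package main

import (
	"fmt"
	"net"
)

func main() {
	// fake.domain is the placeholder registry the test injects; the lookup
	// fails with a *net.DNSError, matching the kubelet's "no such host" lines.
	if _, err := net.LookupHost("fake.domain"); err != nil {
		fmt.Println("lookup failed as expected:", err)
	}
}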
	
	* 
	* ==> kubernetes-dashboard [876a6d6b9a9eb01f62dad8251578db7b0149dbe238b3c7adf0b249795f40d22b] <==
	* 2021/08/13 20:48:41 Starting overwatch
	2021/08/13 20:48:41 Using namespace: kubernetes-dashboard
	2021/08/13 20:48:41 Using in-cluster config to connect to apiserver
	2021/08/13 20:48:41 Using secret token for csrf signing
	2021/08/13 20:48:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:48:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:48:41 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 20:48:41 Generating JWE encryption key
	2021/08/13 20:48:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:48:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:48:41 Initializing JWE encryption key from synchronized object
	2021/08/13 20:48:41 Creating in-cluster Sidecar client
	2021/08/13 20:48:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:48:41 Serving insecurely on HTTP port: 9090
	2021/08/13 20:49:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
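The two health-check failures above line up with dashboard-metrics-scraper crash-looping in the kubelet log: the dashboard's Sidecar client cannot reach the scraper Service while its backing pod is in back-off. A quick way to confirm from outside, shelling out to kubectl the same way the harness's (dbg) Run lines do (context name taken from these logs; a sketch, not harness code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One-shot pod listing for the namespace the dashboard logs mention.
	out, err := exec.Command("kubectl",
		"--context", "old-k8s-version-20210813204214-13784",
		"-n", "kubernetes-dashboard",
		"get", "pods", "-o", "wide").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}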
	
	* 
	* ==> storage-provisioner [1520fea7b1a89ce1c40c6d43daa322b26c485b3a2c2e7ac65b502ed6aadf1b30] <==
	* I0813 20:48:37.559419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:48:37.567221       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:48:37.567273       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:48:54.960197       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:48:54.960367       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813204214-13784_55fad62c-1d99-4684-b7fc-517d665195ce!
	I0813 20:48:54.960322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82147b3c-fc77-11eb-8f20-0242bfc25c59", APIVersion:"v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813204214-13784_55fad62c-1d99-4684-b7fc-517d665195ce became leader
	I0813 20:48:55.060595       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813204214-13784_55fad62c-1d99-4684-b7fc-517d665195ce!
	
	* 
	* ==> storage-provisioner [db200865c8d03f306cf7f2ba03a0989c7ae0759857ee0b23b800ab5178674057] <==
	* I0813 20:48:24.585202       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0813 20:48:24.586815       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
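Reading the two storage-provisioner logs together: the db200865... run started at 20:48:24, while the restarted apiserver was still refusing connections on 10.96.0.1:443, and exited fatally; its replacement 1520fea7... came up at 20:48:37 and went on to win the kube-system/k8s.io-minikube-hostpath lease. A hedged sketch of the reachability probe that fatal exit implies (address taken from the log; not the provisioner's actual code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Poll the in-cluster apiserver VIP instead of dying on the first refusal.
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable on attempt", attempt)
			return
		}
		fmt.Println("attempt", attempt, "failed:", err)
		time.Sleep(2 * time.Second)
	}
}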
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813204214-13784 -n old-k8s-version-20210813204214-13784
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813204214-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-dprvt
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813204214-13784 describe pod metrics-server-8546d8b77b-dprvt
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813204214-13784 describe pod metrics-server-8546d8b77b-dprvt: exit status 1 (66.2825ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-dprvt" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813204214-13784 describe pod metrics-server-8546d8b77b-dprvt: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210813204214-13784
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210813204214-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34",
	        "Created": "2021-08-13T20:45:33.476318759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 247951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:48:07.359292704Z",
	            "FinishedAt": "2021-08-13T20:48:05.641626429Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34/hostname",
	        "HostsPath": "/var/lib/docker/containers/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34/hosts",
	        "LogPath": "/var/lib/docker/containers/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34/8207b4ce0a5218871ea51f6c2fa96b5721091e23430400fb664f5ea4bb635d34-json.log",
	        "Name": "/old-k8s-version-20210813204214-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210813204214-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210813204214-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af2e3db11e0ee3d4527a11456d24ce2827ae11213b9946f79ca95d7d7331a568-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af2e3db11e0ee3d4527a11456d24ce2827ae11213b9946f79ca95d7d7331a568/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af2e3db11e0ee3d4527a11456d24ce2827ae11213b9946f79ca95d7d7331a568/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af2e3db11e0ee3d4527a11456d24ce2827ae11213b9946f79ca95d7d7331a568/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210813204214-13784",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210813204214-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210813204214-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210813204214-13784",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210813204214-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "adf728383ecbb0b98196dcef87c68344d1c85cc657cbe0bc916f766d50414159",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/adf728383ecb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210813204214-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8207b4ce0a52"
	                    ],
	                    "NetworkID": "4f1a585227db4bb5503779d0d20a062df451f1e513337e542778d16c6563ea60",
	                    "EndpointID": "63ab1360b94579b088a51b505aaf473c522b67a25f59804cf07d8eb0aa26d826",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
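Most of the docker inspect dump above is boilerplate for a post-mortem; the fields that actually matter here are State.Status, the published Ports, and the network's IPAddress. The same Go-template format flag the run itself uses (docker container inspect ... --format={{.State.Status}} in the Last Start log below) can narrow the output to just those. A sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// {{json .NetworkSettings.Ports}} uses docker's built-in json template
	// function; the container name is the one inspected above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Status}} {{json .NetworkSettings.Ports}}",
		"old-k8s-version-20210813204214-13784").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("inspect failed:", err)
	}
}

Against the container above this would print the "running" status plus the 127.0.0.1 port map shown in the dump, and nothing else.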
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204214-13784 -n old-k8s-version-20210813204214-13784
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210813204214-13784 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| profile | list --output json                                | minikube                                        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:13 UTC | Fri, 13 Aug 2021 20:42:14 UTC |
	| delete  | -p pause-20210813203929-13784                     | pause-20210813203929-13784                      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:14 UTC | Fri, 13 Aug 2021 20:42:14 UTC |
	| delete  | -p                                                | kubernetes-upgrade-20210813204027-13784         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:13 UTC | Fri, 13 Aug 2021 20:42:16 UTC |
	|         | kubernetes-upgrade-20210813204027-13784           |                                                 |         |         |                               |                               |
	| delete  | -p                                                | stopped-upgrade-20210813204011-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:56 UTC | Fri, 13 Aug 2021 20:42:58 UTC |
	|         | stopped-upgrade-20210813204011-13784              |                                                 |         |         |                               |                               |
	| delete  | -p                                                | running-upgrade-20210813204143-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:04 UTC | Fri, 13 Aug 2021 20:44:07 UTC |
	|         | running-upgrade-20210813204143-13784              |                                                 |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210813204407-13784      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:07 UTC | Fri, 13 Aug 2021 20:44:07 UTC |
	|         | disable-driver-mounts-20210813204407-13784        |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:59 UTC | Fri, 13 Aug 2021 20:44:11 UTC |
	|         | embed-certs-20210813204258-13784                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=docker                                   |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:23 UTC | Fri, 13 Aug 2021 20:44:24 UTC |
	|         | embed-certs-20210813204258-13784                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:24 UTC | Fri, 13 Aug 2021 20:44:47 UTC |
	|         | embed-certs-20210813204258-13784                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:44:47 UTC |
	|         | embed-certs-20210813204258-13784                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:17 UTC | Fri, 13 Aug 2021 20:44:49 UTC |
	|         | no-preload-20210813204216-13784                   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |         |         |                               |                               |
	|         | --driver=docker                                   |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                          |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:01 UTC | Fri, 13 Aug 2021 20:45:02 UTC |
	|         | no-preload-20210813204216-13784                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:07 UTC | Fri, 13 Aug 2021 20:45:15 UTC |
	|         | default-k8s-different-port-20210813204407-13784   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                 |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:02 UTC | Fri, 13 Aug 2021 20:45:23 UTC |
	|         | no-preload-20210813204216-13784                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:45:23 UTC |
	|         | no-preload-20210813204216-13784                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:27 UTC | Fri, 13 Aug 2021 20:45:28 UTC |
	|         | default-k8s-different-port-20210813204407-13784   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:28 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:15 UTC | Fri, 13 Aug 2021 20:47:32 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:44 UTC | Fri, 13 Aug 2021 20:47:45 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                         |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784              |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784              | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:19 UTC | Fri, 13 Aug 2021 20:49:20 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:48:06
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
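Every entry below follows that klog header layout: a level letter (I/W/E/F), an mmdd date, a microsecond timestamp, a thread id, then file:line and the message. A small sketch that splits one of these lines by exactly that format:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)
	line := "I0813 20:48:06.280255  247624 out.go:298] Setting OutFile to fd 1 ..."
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("level=%s date=%s time=%s thread=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}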
	I0813 20:48:06.280255  247624 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:48:06.280377  247624 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:48:06.280391  247624 out.go:311] Setting ErrFile to fd 2...
	I0813 20:48:06.280396  247624 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:48:06.280547  247624 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:48:06.280903  247624 out.go:305] Setting JSON to false
	I0813 20:48:06.324507  247624 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5449,"bootTime":1628882237,"procs":354,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:48:06.324598  247624 start.go:121] virtualization: kvm guest
	I0813 20:48:06.329281  247624 out.go:177] * [old-k8s-version-20210813204214-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:48:06.330841  247624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:48:06.329460  247624 notify.go:169] Checking for updates...
	I0813 20:48:06.332337  247624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:48:06.333870  247624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:48:06.335251  247624 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:48:06.335679  247624 config.go:177] Loaded profile config "old-k8s-version-20210813204214-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:48:06.337625  247624 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 20:48:06.337670  247624 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:48:06.387537  247624 docker.go:132] docker version: linux-19.03.15
	I0813 20:48:06.387650  247624 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:48:06.473267  247624 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:48:06.424978688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:48:06.473385  247624 docker.go:244] overlay module found
	I0813 20:48:06.475570  247624 out.go:177] * Using the docker driver based on existing profile
	I0813 20:48:06.475598  247624 start.go:278] selected driver: docker
	I0813 20:48:06.475604  247624 start.go:751] validating driver "docker" against &{Name:old-k8s-version-20210813204214-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813204214-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:48:06.475701  247624 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:48:06.475739  247624 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:48:06.475765  247624 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:48:06.477041  247624 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:48:06.477970  247624 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:48:06.559368  247624 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:48:06.515538545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:48:06.559486  247624 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:48:06.559514  247624 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:48:06.561467  247624 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
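The repeated warning above means the host kernel (4.9, per the docker info line) exposes no usable memory cgroup controller, so the --memory=2200 request cannot actually be enforced for the container. A rough equivalent of that probe, reading /proc/cgroups on the host (not minikube's actual oci.go check, just the same class of test):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/cgroups columns: subsys_name  hierarchy  num_cgroups  enabled
	data, err := os.ReadFile("/proc/cgroups")
	if err != nil {
		fmt.Println("cannot read /proc/cgroups:", err)
		return
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 4 && fields[0] == "memory" {
			fmt.Println("memory controller enabled flag:", fields[3])
		}
	}
}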
	I0813 20:48:06.561591  247624 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:48:06.561621  247624 cni.go:93] Creating CNI manager for ""
	I0813 20:48:06.561639  247624 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:48:06.561654  247624 start_flags.go:277] config:
	{Name:old-k8s-version-20210813204214-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813204214-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:48:06.563344  247624 out.go:177] * Starting control plane node old-k8s-version-20210813204214-13784 in cluster old-k8s-version-20210813204214-13784
	I0813 20:48:06.563427  247624 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:48:06.564909  247624 out.go:177] * Pulling base image ...
	I0813 20:48:06.564951  247624 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:48:06.564983  247624 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:48:06.564995  247624 cache.go:56] Caching tarball of preloaded images
	I0813 20:48:06.565057  247624 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:48:06.565166  247624 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:48:06.565192  247624 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on crio
	I0813 20:48:06.565349  247624 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/config.json ...
	I0813 20:48:06.652798  247624 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:48:06.652823  247624 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:48:06.652837  247624 cache.go:205] Successfully downloaded all kic artifacts
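The cache steps above are plain existence checks: the preloaded image tarball is looked up under MINIKUBE_HOME and the kic base image in the local docker daemon, and both downloads are skipped when found. A sketch of the tarball half (the $HOME/.minikube default used here is an assumption; this CI run overrides MINIKUBE_HOME, as the path above shows):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical default location; error from UserHomeDir ignored in a sketch.
	home, _ := os.UserHomeDir()
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found local preload, skipping download")
	} else {
		fmt.Println("preload missing, would download:", err)
	}
}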
	I0813 20:48:06.652889  247624 start.go:313] acquiring machines lock for old-k8s-version-20210813204214-13784: {Name:mk76ee894658213dd67f3cb3bd3522bcd5d4bdbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:48:06.653014  247624 start.go:317] acquired machines lock for "old-k8s-version-20210813204214-13784" in 76.644µs
	I0813 20:48:06.653036  247624 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:48:06.653044  247624 fix.go:55] fixHost starting: 
	I0813 20:48:06.653325  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:06.693403  247624 fix.go:108] recreateIfNeeded on old-k8s-version-20210813204214-13784: state=Stopped err=<nil>
	W0813 20:48:06.693436  247624 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:48:04.229060  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.229395  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:04.283195  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.284040  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.381359  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:08.382743  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:06.696001  247624 out.go:177] * Restarting existing docker container for "old-k8s-version-20210813204214-13784" ...
	I0813 20:48:06.696064  247624 cli_runner.go:115] Run: docker start old-k8s-version-20210813204214-13784
	I0813 20:48:07.366459  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:07.408365  247624 kic.go:420] container "old-k8s-version-20210813204214-13784" state is running.
	I0813 20:48:07.408749  247624 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210813204214-13784
	I0813 20:48:07.450987  247624 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/config.json ...
	I0813 20:48:07.451186  247624 machine.go:88] provisioning docker machine ...
	I0813 20:48:07.451207  247624 ubuntu.go:169] provisioning hostname "old-k8s-version-20210813204214-13784"
	I0813 20:48:07.451249  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:07.495106  247624 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:07.495305  247624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32960 <nil> <nil>}
	I0813 20:48:07.495329  247624 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210813204214-13784 && echo "old-k8s-version-20210813204214-13784" | sudo tee /etc/hostname
	I0813 20:48:07.495912  247624 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60726->127.0.0.1:32960: read: connection reset by peer
	I0813 20:48:10.649554  247624 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210813204214-13784
	
	I0813 20:48:10.649645  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:10.689131  247624 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:10.689339  247624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32960 <nil> <nil>}
	I0813 20:48:10.689364  247624 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210813204214-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210813204214-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210813204214-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:48:10.813413  247624 main.go:130] libmachine: SSH cmd err, output: <nil>: 
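	[editor's note] The hostname and /etc/hosts commands above are run over the container's forwarded SSH port (127.0.0.1:32960 in this log); the grep/sed/tee block is deliberately idempotent, only rewriting the 127.0.1.1 entry when it does not already point at the new hostname. A minimal sketch of running such a command with golang.org/x/crypto/ssh; the key path and the shortened command are placeholders, not the real machine paths:

	    package main

	    import (
	    	"fmt"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    func main() {
	    	key, err := os.ReadFile("/path/to/machines/<profile>/id_rsa") // placeholder
	    	if err != nil {
	    		panic(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		panic(err)
	    	}

	    	cfg := &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	    	}
	    	client, err := ssh.Dial("tcp", "127.0.0.1:32960", cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer client.Close()

	    	sess, err := client.NewSession()
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer sess.Close()

	    	// Same idempotent /etc/hosts pattern as the logged command.
	    	out, err := sess.CombinedOutput(
	    		`grep -q old-k8s-version /etc/hosts || echo '127.0.1.1 old-k8s-version' | sudo tee -a /etc/hosts`)
	    	fmt.Println(string(out), err)
	    }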
	I0813 20:48:10.813438  247624 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:48:10.813464  247624 ubuntu.go:177] setting up certificates
	I0813 20:48:10.813476  247624 provision.go:83] configureAuth start
	I0813 20:48:10.813565  247624 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210813204214-13784
	I0813 20:48:10.853956  247624 provision.go:138] copyHostCerts
	I0813 20:48:10.854037  247624 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:48:10.854051  247624 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:48:10.854111  247624 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:48:10.854193  247624 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:48:10.854202  247624 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:48:10.854223  247624 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:48:10.854287  247624 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:48:10.854294  247624 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:48:10.854313  247624 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:48:10.854364  247624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210813204214-13784 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210813204214-13784]
	I0813 20:48:11.031308  247624 provision.go:172] copyRemoteCerts
	I0813 20:48:11.031365  247624 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:48:11.031400  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:11.073359  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:11.165003  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:48:11.181601  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0813 20:48:11.197424  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:48:11.213674  247624 provision.go:86] duration metric: configureAuth took 400.183851ms
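	[editor's note] configureAuth above generates a server certificate whose SAN list (192.168.76.2, 127.0.0.1, localhost, minikube, the profile name) comes straight from provision.go:112, then copies it into /etc/docker on the node. A minimal crypto/x509 sketch of producing a certificate with that SAN layout; it self-signs purely for illustration, whereas the real cert is signed by the ca.pem/ca-key.pem pair from the log:

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20210813204214-13784"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		// san=[...] from the provision.go:112 line above:
	    		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
	    		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-20210813204214-13784"},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }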
	I0813 20:48:11.213702  247624 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:48:11.213871  247624 config.go:177] Loaded profile config "old-k8s-version-20210813204214-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:48:11.214018  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:08.729796  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.731255  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:11.256638  247624 main.go:130] libmachine: Using SSH client type: native
	I0813 20:48:11.256805  247624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32960 <nil> <nil>}
	I0813 20:48:11.256827  247624 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:48:11.749584  247624 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:48:11.749613  247624 machine.go:91] provisioned docker machine in 4.298413293s
	I0813 20:48:11.749626  247624 start.go:267] post-start starting for "old-k8s-version-20210813204214-13784" (driver="docker")
	I0813 20:48:11.749634  247624 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:48:11.749700  247624 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:48:11.749746  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:11.790794  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:11.884942  247624 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:48:11.887599  247624 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:48:11.887620  247624 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:48:11.887628  247624 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:48:11.887634  247624 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:48:11.887643  247624 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:48:11.887703  247624 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:48:11.887802  247624 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:48:11.887909  247624 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:48:11.894326  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:48:11.910279  247624 start.go:270] post-start completed in 160.638884ms
	I0813 20:48:11.910343  247624 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:48:11.910381  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:11.949868  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:12.037777  247624 fix.go:57] fixHost completed within 5.384720712s
	I0813 20:48:12.037815  247624 start.go:80] releasing machines lock for "old-k8s-version-20210813204214-13784", held for 5.384788215s
	I0813 20:48:12.037932  247624 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210813204214-13784
	I0813 20:48:12.077875  247624 ssh_runner.go:149] Run: systemctl --version
	I0813 20:48:12.077930  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:12.077935  247624 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:48:12.077992  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:12.118315  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:12.118600  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:12.213911  247624 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:48:12.359684  247624 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:48:12.368755  247624 docker.go:153] disabling docker service ...
	I0813 20:48:12.368806  247624 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:48:12.460980  247624 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:48:12.471785  247624 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:48:12.534686  247624 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:48:12.601869  247624 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:48:12.610851  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:48:12.623546  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0813 20:48:12.630923  247624 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:48:12.630965  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 20:48:12.638313  247624 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:48:12.644233  247624 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:48:12.644289  247624 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:48:12.650976  247624 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
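	[editor's note] The sequence above is a probe-then-fallback: sysctl fails with "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables" when the br_netfilter module is not loaded, which the code treats as non-fatal, loads the module, and then enables IPv4 forwarding. A plain-Go sketch of the same sequence (the helper is illustrative):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func run(name string, args ...string) error {
	    	out, err := exec.Command(name, args...).CombinedOutput()
	    	if err != nil {
	    		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	    	}
	    	return nil
	    }

	    func main() {
	    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
	    		// Missing /proc entry usually just means the module is not
	    		// loaded yet, exactly as the log notes ("might be okay").
	    		_ = run("sudo", "modprobe", "br_netfilter")
	    	}
	    	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	    }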
	I0813 20:48:12.656746  247624 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:48:12.714912  247624 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:48:12.724053  247624 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:48:12.724132  247624 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:48:12.727774  247624 start.go:413] Will wait 60s for crictl version
	I0813 20:48:12.727837  247624 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:48:12.758796  247624 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:48:12.758879  247624 ssh_runner.go:149] Run: crio --version
	I0813 20:48:12.823180  247624 ssh_runner.go:149] Run: crio --version
	I0813 20:48:08.284130  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.284262  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:12.783617  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:10.881189  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:13.381834  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:12.884580  247624 out.go:177] * Preparing Kubernetes v1.14.0 on CRI-O 1.20.3 ...
	I0813 20:48:12.884658  247624 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210813204214-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:48:12.923787  247624 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0813 20:48:12.927116  247624 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:48:12.936132  247624 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:48:12.936188  247624 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:48:12.963472  247624 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:48:12.963494  247624 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:48:12.963547  247624 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:48:12.985284  247624 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:48:12.985312  247624 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:48:12.985375  247624 ssh_runner.go:149] Run: crio config
	I0813 20:48:13.051955  247624 cni.go:93] Creating CNI manager for ""
	I0813 20:48:13.051982  247624 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:48:13.051994  247624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:48:13.052007  247624 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210813204214-13784 NodeName:old-k8s-version-20210813204214-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:48:13.052127  247624 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-20210813204214-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210813204214-13784
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:48:13.052219  247624 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-20210813204214-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813204214-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:48:13.052265  247624 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0813 20:48:13.059197  247624 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:48:13.059263  247624 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:48:13.065749  247624 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (639 bytes)
	I0813 20:48:13.077293  247624 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:48:13.089126  247624 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0813 20:48:13.100571  247624 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:48:13.103214  247624 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:48:13.111529  247624 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784 for IP: 192.168.76.2
	I0813 20:48:13.111578  247624 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:48:13.111597  247624 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:48:13.111655  247624 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.key
	I0813 20:48:13.111679  247624 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/apiserver.key.31bdca25
	I0813 20:48:13.111701  247624 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/proxy-client.key
	I0813 20:48:13.111807  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:48:13.111867  247624 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:48:13.111882  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:48:13.111916  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:48:13.111956  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:48:13.111994  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:48:13.112055  247624 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:48:13.112951  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:48:13.129200  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:48:13.144609  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:48:13.160140  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:48:13.175551  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:48:13.190989  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:48:13.206257  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:48:13.221623  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:48:13.237944  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:48:13.253436  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:48:13.269144  247624 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:48:13.284926  247624 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:48:13.296120  247624 ssh_runner.go:149] Run: openssl version
	I0813 20:48:13.300736  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:48:13.307616  247624 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:13.310435  247624 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:13.310488  247624 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:48:13.314856  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:48:13.320921  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:48:13.327644  247624 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:48:13.330480  247624 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:48:13.330512  247624 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:48:13.334846  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:48:13.340722  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:48:13.347333  247624 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:48:13.350159  247624 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:48:13.350196  247624 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:48:13.354589  247624 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
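	[editor's note] The openssl/ln sequence above installs each CA into the system trust store the way OpenSSL expects: compute the certificate's subject hash and symlink /etc/ssl/certs/<hash>.0 at the PEM file. A minimal Go sketch that shells out to openssl for the hash, then creates the link if it does not exist (needs root; paths and the b5213941 example hash come from the log):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	    	// Mirrors: openssl x509 -hash -noout -in <cert>
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	    	if err != nil {
	    		panic(err)
	    	}
	    	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log

	    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	    	if _, err := os.Lstat(link); os.IsNotExist(err) {
	    		if err := os.Symlink(cert, link); err != nil {
	    			panic(err)
	    		}
	    	}
	    }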
	I0813 20:48:13.360555  247624 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210813204214-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813204214-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:48:13.360654  247624 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:48:13.360686  247624 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:48:13.383613  247624 cri.go:76] found id: ""
	I0813 20:48:13.383668  247624 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:48:13.390113  247624 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:48:13.390135  247624 kubeadm.go:600] restartCluster start
	I0813 20:48:13.390177  247624 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:48:13.395888  247624 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:13.396973  247624 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210813204214-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:48:13.397504  247624 kubeconfig.go:128] "old-k8s-version-20210813204214-13784" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:48:13.398463  247624 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
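	[editor's note] The verify/repair step above loads the kubeconfig, notices the profile's context is gone, and rewrites the file under a lock. A minimal sketch of the check-and-readd idea using k8s.io/client-go/tools/clientcmd; the path is a placeholder and the stub context fields are illustrative, not minikube's exact repair logic:

	    package main

	    import (
	    	"fmt"

	    	"k8s.io/client-go/tools/clientcmd"
	    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	    )

	    func main() {
	    	path := "/path/to/kubeconfig" // placeholder
	    	cfg, err := clientcmd.LoadFromFile(path)
	    	if err != nil {
	    		panic(err)
	    	}

	    	name := "old-k8s-version-20210813204214-13784"
	    	if _, ok := cfg.Contexts[name]; !ok {
	    		fmt.Printf("%q context is missing - will repair!\n", name)
	    		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	    		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
	    			panic(err)
	    		}
	    	}
	    }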
	I0813 20:48:13.401242  247624 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:48:13.407099  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:13.407138  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:13.419218  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:13.619588  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:13.619668  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:13.633308  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:13.819444  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:13.819506  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:13.832736  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.020018  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.020099  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.033344  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.219563  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.219649  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.233533  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.419710  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.419779  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.433118  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.619355  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.619423  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.633111  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:14.820251  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:14.820335  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:14.833893  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.020171  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.020241  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.033320  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.219597  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.219668  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.233368  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.419768  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.419854  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.433098  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.619344  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.619423  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.632845  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:15.820119  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:15.820189  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:15.833782  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.019995  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:16.020094  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:16.033862  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.220086  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:16.220182  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:13.230383  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:15.729280  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:15.284095  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:17.782645  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:15.881647  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:18.382914  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	W0813 20:48:16.234244  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.419445  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:16.419510  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:16.432969  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.432993  247624 api_server.go:164] Checking apiserver status ...
	I0813 20:48:16.433032  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:48:16.444939  247624 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:48:16.444963  247624 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
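	[editor's note] The long run of "Checking apiserver status" lines above is a simple poll: run pgrep roughly every 200ms until it reports a kube-apiserver pid, and conclude "needs reconfigure" once the deadline passes (pgrep exits 1 when nothing matches, hence the repeated "Process exited with status 1"). A plain-Go sketch of that loop; the interval and timeout are illustrative:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    	"time"
	    )

	    func apiserverPid() (string, error) {
	    	// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
	    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	    	return strings.TrimSpace(string(out)), err // exit status 1 => no match yet
	    }

	    func main() {
	    	deadline := time.Now().Add(3 * time.Second)
	    	for time.Now().Before(deadline) {
	    		if pid, err := apiserverPid(); err == nil {
	    			fmt.Println("apiserver pid:", pid)
	    			return
	    		}
	    		time.Sleep(200 * time.Millisecond)
	    	}
	    	fmt.Println("needs reconfigure: apiserver error: timed out waiting for the condition")
	    }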
	I0813 20:48:16.444970  247624 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:48:16.444981  247624 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:48:16.445019  247624 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:48:16.479950  247624 cri.go:76] found id: ""
	I0813 20:48:16.480015  247624 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:48:16.489159  247624 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:48:16.495994  247624 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5751 Aug 13 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Aug 13 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Aug 13 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Aug 13 20:45 /etc/kubernetes/scheduler.conf
	
	I0813 20:48:16.496048  247624 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:48:16.502354  247624 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:48:16.508405  247624 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:48:16.514950  247624 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:48:16.521346  247624 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:48:16.527636  247624 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:48:16.527656  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:16.674292  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:17.134744  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:17.245254  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:17.288162  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:17.360004  247624 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:48:17.360112  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:17.875270  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:18.376073  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:48:18.463698  247624 api_server.go:70] duration metric: took 1.103695361s to wait for apiserver process to appear ...
	I0813 20:48:18.463729  247624 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:48:18.463741  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:18.464169  247624 api_server.go:255] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0813 20:48:18.964871  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:17.729744  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:20.229284  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.230326  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:19.783623  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.283929  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.853517  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:48:22.853551  247624 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:48:22.964759  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:22.968920  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 20:48:22.968942  247624 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 20:48:23.465260  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:23.471242  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 20:48:23.471268  247624 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 20:48:23.964536  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:23.970198  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 20:48:23.970229  247624 api_server.go:101] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 20:48:24.464727  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:48:24.469935  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:48:24.479588  247624 api_server.go:139] control plane version: v1.14.0
	I0813 20:48:24.479657  247624 api_server.go:129] duration metric: took 6.01591953s to wait for apiserver health ...
	I0813 20:48:24.479681  247624 cni.go:93] Creating CNI manager for ""
	I0813 20:48:24.479690  247624 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
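
The repeated 500s above come from kube-apiserver post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) that have not finished yet; minikube simply re-polls /healthz on a roughly 500ms cadence until it returns 200. A minimal Go sketch of such a poll loop follows; the URL, cadence, and TLS handling are illustrative assumptions, not minikube's actual api_server.go logic.

    // healthzpoll.go: minimal sketch of polling an apiserver /healthz
    // endpoint until it returns 200, in the spirit of the api_server.go
    // lines above. The URL, 500ms cadence, and InsecureSkipVerify are
    // illustrative assumptions, not minikube's implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Real code would trust the cluster CA rather than skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "returned 200: ok"
                }
                // A 500 body lists the [-] hooks still pending, as in the log above.
                fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
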
	I0813 20:48:20.881226  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:22.882969  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:24.482948  247624 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:48:24.483018  247624 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:48:24.486528  247624 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0813 20:48:24.486549  247624 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:48:24.498573  247624 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:48:24.763615  247624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:48:24.773742  247624 system_pods.go:59] 8 kube-system pods found
	I0813 20:48:24.773773  247624 system_pods.go:61] "coredns-fb8b8dccf-556lc" [814e698b-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773781  247624 system_pods.go:61] "etcd-old-k8s-version-20210813204214-13784" [a49dccef-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773786  247624 system_pods.go:61] "kindnet-gwddx" [8143a261-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773792  247624 system_pods.go:61] "kube-apiserver-old-k8s-version-20210813204214-13784" [a5cf018d-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773798  247624 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210813204214-13784" [ccd414a5-fc77-11eb-b136-02429fe89262] Pending
	I0813 20:48:24.773811  247624 system_pods.go:61] "kube-proxy-97hnd" [8143a1df-fc77-11eb-8f20-0242bfc25c59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 20:48:24.773825  247624 system_pods.go:61] "kube-scheduler-old-k8s-version-20210813204214-13784" [a1a2d87c-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773833  247624 system_pods.go:61] "storage-provisioner" [8215e12f-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:48:24.773841  247624 system_pods.go:74] duration metric: took 10.205938ms to wait for pod list to return data ...
	I0813 20:48:24.773853  247624 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:48:24.776949  247624 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:48:24.776970  247624 node_conditions.go:123] node cpu capacity is 8
	I0813 20:48:24.776983  247624 node_conditions.go:105] duration metric: took 3.122261ms to run NodePressure ...
	I0813 20:48:24.777002  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:48:24.962972  247624 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 20:48:24.966701  247624 kubeadm.go:746] kubelet initialised
	I0813 20:48:24.966722  247624 kubeadm.go:747] duration metric: took 3.722351ms waiting for restarted kubelet to initialise ...
	I0813 20:48:24.966732  247624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:48:24.970304  247624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.981737  247624 pod_ready.go:92] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:24.981757  247624 pod_ready.go:81] duration metric: took 11.428229ms waiting for pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.981769  247624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.986014  247624 pod_ready.go:92] pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:24.986035  247624 pod_ready.go:81] duration metric: took 4.258023ms waiting for pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.986052  247624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.989710  247624 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:24.989728  247624 pod_ready.go:81] duration metric: took 3.666526ms waiting for pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.989739  247624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:26.172506  247624 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:26.172534  247624 pod_ready.go:81] duration metric: took 1.18278695s waiting for pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:26.172546  247624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:24.729443  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:27.229551  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:24.285035  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:26.783455  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:25.381170  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:27.381282  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:28.372240  247624 pod_ready.go:102] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:29.372288  247624 pod_ready.go:92] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:29.372315  247624 pod_ready.go:81] duration metric: took 3.199762729s waiting for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:29.372326  247624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:29.376413  247624 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:48:29.376432  247624 pod_ready.go:81] duration metric: took 4.098487ms waiting for pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:29.376443  247624 pod_ready.go:38] duration metric: took 4.409697623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:48:29.376462  247624 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:48:29.397564  247624 ops.go:34] apiserver oom_adj: 16
	I0813 20:48:29.397584  247624 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 20:48:29.397597  247624 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
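
The two ssh_runner commands above read the apiserver's oom_adj score (16) and lower it to -10 so the kernel OOM killer prefers to kill other processes before kube-apiserver. A hedged Go sketch of the same adjustment, run locally rather than over SSH with pgrep and sudo tee as minikube does; the pid argument and file mode are illustrative.

    // oomadj.go: sketch of the oom_adj adjustment shown above. Reads
    // /proc/<pid>/oom_adj and writes -10 so the kernel OOM killer is less
    // likely to pick the apiserver. Assumes a Linux host and root privileges.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func adjustOOM(pid string) error {
        path := fmt.Sprintf("/proc/%s/oom_adj", pid)
        cur, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(cur)))
        // Writing requires root, hence the "sudo tee" in the log above.
        return os.WriteFile(path, []byte("-10\n"), 0644)
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Println("usage: oomadj <pid>")
            return
        }
        if err := adjustOOM(os.Args[1]); err != nil {
            fmt.Println(err)
        }
    }
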
	I0813 20:48:29.420965  247624 kubeadm.go:604] restartCluster took 16.030818309s
	I0813 20:48:29.420986  247624 kubeadm.go:392] StartCluster complete in 16.060436511s
	I0813 20:48:29.421008  247624 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:48:29.421104  247624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:48:29.422712  247624 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:48:29.933985  247624 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813204214-13784" rescaled to 1
	I0813 20:48:29.934045  247624 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 20:48:29.936012  247624 out.go:177] * Verifying Kubernetes components...
	I0813 20:48:29.936079  247624 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:48:29.934095  247624 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:48:29.934134  247624 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:48:29.934301  247624 config.go:177] Loaded profile config "old-k8s-version-20210813204214-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	I0813 20:48:29.936210  247624 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936228  247624 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936235  247624 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936239  247624 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210813204214-13784"
	W0813 20:48:29.936244  247624 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:48:29.936245  247624 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936251  247624 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210813204214-13784"
	W0813 20:48:29.936255  247624 addons.go:147] addon dashboard should already be in state true
	W0813 20:48:29.936257  247624 addons.go:147] addon metrics-server should already be in state true
	I0813 20:48:29.936287  247624 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:48:29.936288  247624 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:48:29.936215  247624 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936486  247624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813204214-13784"
	I0813 20:48:29.936292  247624 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:48:29.936783  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:29.936816  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:29.936882  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:29.937141  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:30.001232  247624 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:48:30.002868  247624 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:48:30.002961  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:48:30.002973  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:48:30.003029  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:30.009789  247624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:48:30.008415  247624 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813204214-13784"
	W0813 20:48:30.009881  247624 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:48:30.009906  247624 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:48:30.009909  247624 host.go:66] Checking if "old-k8s-version-20210813204214-13784" exists ...
	I0813 20:48:30.009919  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:48:30.009967  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:30.010412  247624 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210813204214-13784 --format={{.State.Status}}
	I0813 20:48:30.017865  247624 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:48:30.017945  247624 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:48:30.017961  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:48:30.018040  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:30.033797  247624 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813204214-13784" to be "Ready" ...
	I0813 20:48:30.033952  247624 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:48:30.036376  247624 node_ready.go:49] node "old-k8s-version-20210813204214-13784" has status "Ready":"True"
	I0813 20:48:30.036394  247624 node_ready.go:38] duration metric: took 2.565834ms waiting for node "old-k8s-version-20210813204214-13784" to be "Ready" ...
	I0813 20:48:30.036405  247624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:48:30.042433  247624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace to be "Ready" ...
	I0813 20:48:30.061111  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:30.063475  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:30.071874  247624 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:48:30.071911  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:48:30.071963  247624 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210813204214-13784
	I0813 20:48:30.079028  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:30.114512  247624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813204214-13784/id_rsa Username:docker}
	I0813 20:48:30.158647  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:48:30.158669  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:48:30.163108  247624 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:48:30.171156  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:48:30.171179  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:48:30.175077  247624 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:48:30.175097  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:48:30.183604  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:48:30.183626  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:48:30.187978  247624 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:48:30.187995  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:48:30.196520  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:48:30.196545  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:48:30.201241  247624 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:48:30.201261  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:48:30.210162  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:48:30.210182  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:48:30.258135  247624 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:48:30.259266  247624 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:48:30.271151  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:48:30.271177  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:48:30.285366  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:48:30.285392  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:48:30.358111  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:48:30.358139  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:48:30.373229  247624 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:48:30.373255  247624 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:48:30.387993  247624 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:48:30.716958  247624 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210813204214-13784"
	I0813 20:48:30.836432  247624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 20:48:30.836461  247624 addons.go:344] enableAddons completed in 902.350782ms
	I0813 20:48:29.230090  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:31.728672  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:29.283870  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:31.783610  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:29.881087  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:32.381194  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:34.381388  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:32.052560  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:34.052825  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:36.053243  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:33.729991  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:36.229802  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:34.283551  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:36.783755  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:36.880556  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.881294  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.053829  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:40.055059  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:38.230069  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:40.728676  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:39.283302  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:41.284504  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:41.382701  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:43.881109  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.553186  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.561372  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:42.729355  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:44.730264  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:47.230313  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:43.783584  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:45.783881  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:45.882125  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:48.381404  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:46.563785  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.053875  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:49.729434  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:52.229100  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:48.284093  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:50.782799  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:52.782850  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:50.881295  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:52.881561  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:51.557330  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.052925  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.053467  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.230528  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.729710  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:54.783262  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:56.783300  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:55.381418  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:57.880755  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.554094  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:01.053879  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:58.729881  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:00.732184  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:59.283090  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:01.283459  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:48:59.881433  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:02.380600  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:04.381265  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.554673  247624 pod_ready.go:102] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.553647  247624 pod_ready.go:92] pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.553675  247624 pod_ready.go:81] duration metric: took 35.511214581s waiting for pod "coredns-fb8b8dccf-556lc" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.553689  247624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.557689  247624 pod_ready.go:92] pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.557707  247624 pod_ready.go:81] duration metric: took 4.010218ms waiting for pod "etcd-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.557718  247624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.561155  247624 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.561171  247624 pod_ready.go:81] duration metric: took 3.444956ms waiting for pod "kube-apiserver-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.561180  247624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.564821  247624 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.564835  247624 pod_ready.go:81] duration metric: took 3.649416ms waiting for pod "kube-controller-manager-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.564844  247624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.568289  247624 pod_ready.go:92] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.568306  247624 pod_ready.go:81] duration metric: took 3.456412ms waiting for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.568314  247624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.951947  247624 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:49:05.951969  247624 pod_ready.go:81] duration metric: took 383.647763ms waiting for pod "kube-scheduler-old-k8s-version-20210813204214-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:49:05.951980  247624 pod_ready.go:38] duration metric: took 35.915563837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:49:05.951999  247624 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:49:05.952039  247624 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:49:05.977021  247624 api_server.go:70] duration metric: took 36.042945555s to wait for apiserver process to appear ...
	I0813 20:49:05.977043  247624 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:49:05.977053  247624 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0813 20:49:05.982278  247624 api_server.go:265] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0813 20:49:05.982985  247624 api_server.go:139] control plane version: v1.14.0
	I0813 20:49:05.983029  247624 api_server.go:129] duration metric: took 5.980504ms to wait for apiserver health ...
	I0813 20:49:05.983038  247624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:49:06.153346  247624 system_pods.go:59] 9 kube-system pods found
	I0813 20:49:06.153374  247624 system_pods.go:61] "coredns-fb8b8dccf-556lc" [814e698b-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153379  247624 system_pods.go:61] "etcd-old-k8s-version-20210813204214-13784" [a49dccef-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153383  247624 system_pods.go:61] "kindnet-gwddx" [8143a261-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153394  247624 system_pods.go:61] "kube-apiserver-old-k8s-version-20210813204214-13784" [a5cf018d-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153398  247624 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210813204214-13784" [ccd414a5-fc77-11eb-b136-02429fe89262] Running
	I0813 20:49:06.153401  247624 system_pods.go:61] "kube-proxy-97hnd" [8143a1df-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153406  247624 system_pods.go:61] "kube-scheduler-old-k8s-version-20210813204214-13784" [a1a2d87c-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153413  247624 system_pods.go:61] "metrics-server-8546d8b77b-dprvt" [d74eab74-fc77-11eb-b136-02429fe89262] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:49:06.153419  247624 system_pods.go:61] "storage-provisioner" [8215e12f-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.153425  247624 system_pods.go:74] duration metric: took 170.381598ms to wait for pod list to return data ...
	I0813 20:49:06.153437  247624 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:49:06.351595  247624 default_sa.go:45] found service account: "default"
	I0813 20:49:06.351618  247624 default_sa.go:55] duration metric: took 198.175573ms for default service account to be created ...
	I0813 20:49:06.351626  247624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:49:06.554202  247624 system_pods.go:86] 9 kube-system pods found
	I0813 20:49:06.554236  247624 system_pods.go:89] "coredns-fb8b8dccf-556lc" [814e698b-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554245  247624 system_pods.go:89] "etcd-old-k8s-version-20210813204214-13784" [a49dccef-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554251  247624 system_pods.go:89] "kindnet-gwddx" [8143a261-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554260  247624 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813204214-13784" [a5cf018d-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554268  247624 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813204214-13784" [ccd414a5-fc77-11eb-b136-02429fe89262] Running
	I0813 20:49:06.554280  247624 system_pods.go:89] "kube-proxy-97hnd" [8143a1df-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554287  247624 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813204214-13784" [a1a2d87c-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554298  247624 system_pods.go:89] "metrics-server-8546d8b77b-dprvt" [d74eab74-fc77-11eb-b136-02429fe89262] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:49:06.554307  247624 system_pods.go:89] "storage-provisioner" [8215e12f-fc77-11eb-8f20-0242bfc25c59] Running
	I0813 20:49:06.554319  247624 system_pods.go:126] duration metric: took 202.688336ms to wait for k8s-apps to be running ...
	I0813 20:49:06.554335  247624 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:49:06.554394  247624 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:49:06.564011  247624 system_svc.go:56] duration metric: took 9.671803ms WaitForService to wait for kubelet.
	I0813 20:49:06.564037  247624 kubeadm.go:547] duration metric: took 36.629963456s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:49:06.564064  247624 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:49:06.751768  247624 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:49:06.751791  247624 node_conditions.go:123] node cpu capacity is 8
	I0813 20:49:06.751806  247624 node_conditions.go:105] duration metric: took 187.737131ms to run NodePressure ...
	I0813 20:49:06.751818  247624 start.go:231] waiting for startup goroutines ...
	I0813 20:49:06.801010  247624 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0813 20:49:06.803519  247624 out.go:177] 
	W0813 20:49:06.803779  247624 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0813 20:49:06.805247  247624 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:49:06.806842  247624 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210813204214-13784" cluster and "default" namespace by default
	I0813 20:49:03.229347  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:05.229979  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:03.783900  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:06.283113  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:06.881861  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:09.380594  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:07.729750  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:10.229721  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:08.283866  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:10.782970  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.783580  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:11.880660  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:14.380529  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:12.730091  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:15.228721  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.230083  228026 pod_ready.go:102] pod "metrics-server-7c784ccb57-bk4h6" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:15.283194  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:17.320017  233224 pod_ready.go:102] pod "metrics-server-7c784ccb57-5jlhs" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:16.380711  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:49:18.381817  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:48:07 UTC, end at Fri 2021-08-13 20:49:21 UTC. --
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.501209391Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,RepoTags:[k8s.gcr.io/coredns:1.3.1],RepoDigests:[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns@sha256:638adb0319813f2479ba3642bbe37136db8cf363b48fb3eb7dc8db634d8d5a5b],Size_:40535007,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f44c6b69-84d3-4d58-bb06-d1c062830026 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.501904241Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.3.1" id=2e89ffb6-1e40-43cd-a8ad-92b784aecfe1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.502569424Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,RepoTags:[k8s.gcr.io/coredns:1.3.1],RepoDigests:[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns@sha256:638adb0319813f2479ba3642bbe37136db8cf363b48fb3eb7dc8db634d8d5a5b],Size_:40535007,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2e89ffb6-1e40-43cd-a8ad-92b784aecfe1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.503314662Z" level=info msg="Creating container: kube-system/coredns-fb8b8dccf-556lc/coredns" id=87fd1137-37e6-4d18-a8a5-f1c26d8f3d57 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.515434297Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/75658946d2e527aa23f1ae73e7425bd91739bef836d70fcd13a8b707656a6232/merged/etc/passwd: no such file or directory"
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.515465237Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/75658946d2e527aa23f1ae73e7425bd91739bef836d70fcd13a8b707656a6232/merged/etc/group: no such file or directory"
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.685047580Z" level=info msg="Created container 67b0bb98fb4778bb265f7c6a281ed4685496b7db464383984a767e2e8fb2e89f: kube-system/coredns-fb8b8dccf-556lc/coredns" id=87fd1137-37e6-4d18-a8a5-f1c26d8f3d57 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.685635840Z" level=info msg="Starting container: 67b0bb98fb4778bb265f7c6a281ed4685496b7db464383984a767e2e8fb2e89f" id=d5d4b327-1f13-4bce-a31f-e5b7f3f7332b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:48:54 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:54.696805052Z" level=info msg="Started container 67b0bb98fb4778bb265f7c6a281ed4685496b7db464383984a767e2e8fb2e89f: kube-system/coredns-fb8b8dccf-556lc/coredns" id=d5d4b327-1f13-4bce-a31f-e5b7f3f7332b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:56.370149746Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=82412610-204f-4f0c-a6ae-9e41379e8d73 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:56.370425670Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=82412610-204f-4f0c-a6ae-9e41379e8d73 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:56.371013049Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=a5fdf84d-3c2b-4bf9-988a-6ed018fcc108 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:48:56.377784613Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.370285225Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=87efa43b-b3ed-4f98-9832-91336135b201 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.371879467Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=87efa43b-b3ed-4f98-9832-91336135b201 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.372422210Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=57ec3908-8d4b-49cb-aad1-7c142a045aea name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.373772812Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=57ec3908-8d4b-49cb-aad1-7c142a045aea name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.374378970Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-2jx9q/dashboard-metrics-scraper" id=a8827247-fea8-410e-bf53-369cb0f03972 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.530447633Z" level=info msg="Created container c44b552369117529c015981632f911c5880cc514b50a6391accd679475153c18: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-2jx9q/dashboard-metrics-scraper" id=a8827247-fea8-410e-bf53-369cb0f03972 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.530901429Z" level=info msg="Starting container: c44b552369117529c015981632f911c5880cc514b50a6391accd679475153c18" id=76949d97-de3a-4a84-81de-02a46188c3d0 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:07.560450918Z" level=info msg="Started container c44b552369117529c015981632f911c5880cc514b50a6391accd679475153c18: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-2jx9q/dashboard-metrics-scraper" id=76949d97-de3a-4a84-81de-02a46188c3d0 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:49:08 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:08.522455783Z" level=info msg="Removing container: 21c7f944955ffd23ab4e4a0f45e83016beccea4d618f23bb6ad5eb63d884de4e" id=a4230441-ae99-4593-a72b-eb2e0573df44 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:49:08 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:08.558211054Z" level=info msg="Removed container 21c7f944955ffd23ab4e4a0f45e83016beccea4d618f23bb6ad5eb63d884de4e: kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-2jx9q/dashboard-metrics-scraper" id=a4230441-ae99-4593-a72b-eb2e0573df44 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:49:09 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:09.370089831Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=d4206612-cb91-41f7-b430-d92fd34840c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:49:09 old-k8s-version-20210813204214-13784 crio[389]: time="2021-08-13 20:49:09.370337591Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=d4206612-cb91-41f7-b430-d92fd34840c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID
	c44b552369117       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   14 seconds ago       Exited              dashboard-metrics-scraper   2                   8c495a1818306
	67b0bb98fb477       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   27 seconds ago       Running             coredns                     1                   6dc7107cb9773
	876a6d6b9a9eb       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   40 seconds ago       Running             kubernetes-dashboard        0                   6e7c77f8b6aa0
	1520fea7b1a89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   44 seconds ago       Running             storage-provisioner         2                   cfbe586c72f2b
	05c24bd3cd45c       5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d   56 seconds ago       Running             kube-proxy                  0                   7574f10e78dc5
	db200865c8d03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   57 seconds ago       Exited              storage-provisioner         1                   cfbe586c72f2b
	ca6d391b3893d       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c   57 seconds ago       Exited              coredns                     0                   6dc7107cb9773
	a3debb5e67085       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   57 seconds ago       Running             busybox                     0                   9e8de2b4b107e
	cc1c4fc69c79b       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   57 seconds ago       Running             kindnet-cni                 0                   64c4bc16ab677
	eb969b98501aa       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d   About a minute ago   Running             etcd                        0                   9bbe35ae5a5c4
	05205c8228507       ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6   About a minute ago   Running             kube-apiserver              0                   865f284b87841
	f0e9f48420d19       b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150   About a minute ago   Running             kube-controller-manager     0                   4c182d158cb0b
	c9eaddd82f247       00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492   About a minute ago   Running             kube-scheduler              0                   99f3465d1e237
	
	* 
	* ==> coredns [67b0bb98fb4778bb265f7c6a281ed4685496b7db464383984a767e2e8fb2e89f] <==
	* .:53
	2021-08-13T20:48:54.816Z [INFO] CoreDNS-1.3.1
	2021-08-13T20:48:54.816Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T20:48:54.816Z [INFO] plugin/reload: Running configuration MD5 = 08b08abdbcbace0be59f7a292f5ad181
	
	* 
	* ==> coredns [ca6d391b3893d9aa6fc3eefb859f5bac4e008775a6fd15c2024f7f88b33609ff] <==
	* .:53
	2021-08-13T20:46:42.268Z [INFO] CoreDNS-1.3.1
	2021-08-13T20:46:42.268Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T20:46:42.268Z [INFO] plugin/reload: Running configuration MD5 = 08b08abdbcbace0be59f7a292f5ad181
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2021-08-13T20:48:29.267Z [INFO] CoreDNS-1.3.1
	2021-08-13T20:48:29.267Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T20:48:29.268Z [INFO] plugin/reload: Running configuration MD5 = 08b08abdbcbace0be59f7a292f5ad181
	E0813 20:48:54.268215       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 20:48:54.268215       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-556lc.unknownuser.log.ERROR.20210813-204854.1: no such file or directory
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813204214-13784
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210813204214-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=old-k8s-version-20210813204214-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_45_58_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:45:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:48:23 +0000   Fri, 13 Aug 2021 20:45:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:48:23 +0000   Fri, 13 Aug 2021 20:45:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:48:23 +0000   Fri, 13 Aug 2021 20:45:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:48:23 +0000   Fri, 13 Aug 2021 20:45:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20210813204214-13784
	Capacity:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	System Info:
	 Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	 System UUID:                958edac4-ce8f-4ebc-810e-7874212ae9be
	 Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	 Kernel Version:             4.9.0-16-amd64
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.20.3
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (12 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                coredns-fb8b8dccf-556lc                                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m5s
	  kube-system                etcd-old-k8s-version-20210813204214-13784                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                kindnet-gwddx                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m5s
	  kube-system                kube-apiserver-old-k8s-version-20210813204214-13784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                kube-controller-manager-old-k8s-version-20210813204214-13784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                kube-proxy-97hnd                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                kube-scheduler-old-k8s-version-20210813204214-13784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                metrics-server-8546d8b77b-dprvt                                 100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         41s
	  kube-system                storage-provisioner                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-2jx9q                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-2jdmv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                              Message
	  ----    ------                   ----                   ----                                              -------
	  Normal  NodeHasSufficientMemory  3m35s (x8 over 3m36s)  kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s (x8 over 3m36s)  kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s (x8 over 3m36s)  kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m3s                   kube-proxy, old-k8s-version-20210813204214-13784  Starting kube-proxy.
	  Normal  Starting                 64s                    kubelet, old-k8s-version-20210813204214-13784     Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)      kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)      kubelet, old-k8s-version-20210813204214-13784     Node old-k8s-version-20210813204214-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 56s                    kube-proxy, old-k8s-version-20210813204214-13784  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000005] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000001] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +2.011829] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000002] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000011] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000002] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +4.159725] net_ratelimit: 1 callbacks suppressed
	[  +0.000002] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000003] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000004] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000003] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000004] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000001] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +8.191387] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000002] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000003] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000001] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +0.000004] IPv4: martian source 10.158.0.4 from 10.244.0.2, on dev br-4f1a585227db
	[  +0.000001] ll header: 00000000: 02 42 97 d2 ae d8 02 42 c0 a8 4c 02 08 00        .B.....B..L...
	[  +1.556850] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethbb85f246
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 9e 18 ea a0 26 43 08 06        ..........&C..
	[  +0.083664] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth492f01f6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 6e 43 78 f7 c6 0c 08 06        ......nCx.....
	[  +0.000838] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethe4b785c7
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e c7 3b a1 94 fd 08 06        ......^.;.....
	
	* 
	* ==> etcd [eb969b98501aa07fcc3fb122054c3aa65d6e26b39a5cae359f019788fbbc3d94] <==
	* 2021-08-13 20:48:18.390907 I | etcdserver: snapshot count = 10000
	2021-08-13 20:48:18.390924 I | etcdserver: advertise client URLs = https://192.168.76.2:2379
	2021-08-13 20:48:18.462863 I | etcdserver: restarting member ea7e25599daad906 in cluster 6f20f2c4b2fb5f8a at commit index 552
	2021-08-13 20:48:18.462953 I | raft: ea7e25599daad906 became follower at term 2
	2021-08-13 20:48:18.462967 I | raft: newRaft ea7e25599daad906 [peers: [], term: 2, commit: 552, applied: 0, lastindex: 552, lastterm: 2]
	2021-08-13 20:48:18.473780 W | auth: simple token is not cryptographically signed
	2021-08-13 20:48:18.475878 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	2021-08-13 20:48:18.476457 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2021-08-13 20:48:18.476547 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-13 20:48:18.476584 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-13 20:48:18.478581 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:48:18.478730 I | embed: listening for metrics on http://192.168.76.2:2381
	2021-08-13 20:48:18.478808 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 20:48:20.263923 I | raft: ea7e25599daad906 is starting a new election at term 2
	2021-08-13 20:48:20.263956 I | raft: ea7e25599daad906 became candidate at term 3
	2021-08-13 20:48:20.263979 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3
	2021-08-13 20:48:20.263991 I | raft: ea7e25599daad906 became leader at term 3
	2021-08-13 20:48:20.263998 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3
	2021-08-13 20:48:20.264923 I | embed: ready to serve client requests
	2021-08-13 20:48:20.265043 I | etcdserver: published {Name:old-k8s-version-20210813204214-13784 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2021-08-13 20:48:20.265072 I | embed: ready to serve client requests
	2021-08-13 20:48:20.266973 I | embed: serving client requests on 192.168.76.2:2379
	2021-08-13 20:48:20.266995 I | embed: serving client requests on 127.0.0.1:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	
	* 
	* ==> kernel <==
	*  20:49:22 up  1:32,  0 users,  load average: 0.65, 2.21, 2.08
	Linux old-k8s-version-20210813204214-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [05205c822850737a37cab612c32f44b5a2db19048dd2966025f9271d3a6c18b3] <==
	* I0813 20:49:09.304987       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:10.305135       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:10.305241       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:11.305405       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:11.305608       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:12.305802       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:12.305945       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:13.306095       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:13.306191       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:14.306344       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:14.306456       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:15.306620       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:15.306737       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:16.306904       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:16.307019       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:17.307195       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:17.307317       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:18.307493       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:18.307630       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:19.307788       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:19.307931       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:20.308102       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:20.308227       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 20:49:21.308411       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 20:49:21.308540       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [f0e9f48420d191b31eb588742c5afcfec0dcf6f3d64c41b3a983f3a0bceca151] <==
	* I0813 20:48:40.848193       1 controller_utils.go:1034] Caches are synced for PVC protection controller
	I0813 20:48:40.848225       1 controller_utils.go:1034] Caches are synced for stateful set controller
	I0813 20:48:40.848480       1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
	I0813 20:48:40.849461       1 controller_utils.go:1034] Caches are synced for taint controller
	I0813 20:48:40.849532       1 taint_manager.go:198] Starting NoExecuteTaintManager
	I0813 20:48:40.849567       1 node_lifecycle_controller.go:1159] Initializing eviction metric for zone: 
	W0813 20:48:40.849625       1 node_lifecycle_controller.go:833] Missing timestamp for Node old-k8s-version-20210813204214-13784. Assuming now as a timestamp.
	I0813 20:48:40.849711       1 node_lifecycle_controller.go:1059] Controller detected that zone  is now in state Normal.
	I0813 20:48:40.849761       1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20210813204214-13784", UID:"73415350-fc77-11eb-8f20-0242bfc25c59", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20210813204214-13784 event: Registered Node old-k8s-version-20210813204214-13784 in Controller
	I0813 20:48:40.851823       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"b603b5f2-fc77-11eb-8f20-0242bfc25c59", APIVersion:"apps/v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-dprvt
	I0813 20:48:40.865860       1 controller_utils.go:1034] Caches are synced for resource quota controller
	I0813 20:48:40.876373       1 controller_utils.go:1034] Caches are synced for persistent volume controller
	I0813 20:48:40.880506       1 controller_utils.go:1034] Caches are synced for deployment controller
	I0813 20:48:40.886357       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"d14e8bbd-fc77-11eb-b136-02429fe89262", APIVersion:"apps/v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
	I0813 20:48:40.886398       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"d14d9ac8-fc77-11eb-b136-02429fe89262", APIVersion:"apps/v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-5b494cc544 to 1
	I0813 20:48:40.890318       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"d753a3c5-fc77-11eb-b136-02429fe89262", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-2jdmv
	I0813 20:48:40.890347       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"d753a299-fc77-11eb-b136-02429fe89262", APIVersion:"apps/v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-2jx9q
	I0813 20:48:40.898257       1 controller_utils.go:1034] Caches are synced for ReplicationController controller
	I0813 20:48:40.898452       1 controller_utils.go:1034] Caches are synced for endpoint controller
	I0813 20:48:40.898813       1 controller_utils.go:1034] Caches are synced for daemon sets controller
	W0813 20:48:42.498523       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0813 20:48:42.498730       1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
	I0813 20:48:42.598948       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	E0813 20:49:10.300643       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:49:14.600384       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [05c24bd3cd45c1e189a79e4e9ba27a9f145567441a61ec98a59569cdde45b95e] <==
	* W0813 20:46:17.270192       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 20:46:17.280785       1 server_others.go:148] Using iptables Proxier.
	I0813 20:46:17.281454       1 server_others.go:178] Tearing down inactive rules.
	I0813 20:46:17.997879       1 server.go:555] Version: v1.14.0
	I0813 20:46:18.003605       1 config.go:202] Starting service config controller
	I0813 20:46:18.003774       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 20:46:18.003636       1 config.go:102] Starting endpoints config controller
	I0813 20:46:18.003822       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 20:46:18.104082       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 20:46:18.104168       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	W0813 20:48:25.073320       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 20:48:25.080063       1 server_others.go:148] Using iptables Proxier.
	I0813 20:48:25.080199       1 server_others.go:178] Tearing down inactive rules.
	I0813 20:48:25.549835       1 server.go:555] Version: v1.14.0
	I0813 20:48:25.555142       1 config.go:102] Starting endpoints config controller
	I0813 20:48:25.555187       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 20:48:25.555176       1 config.go:202] Starting service config controller
	I0813 20:48:25.555205       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 20:48:25.655356       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 20:48:25.655362       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-scheduler [c9eaddd82f247ad2fabdaea849dae20fa4b638fb49c9c0ff051de61572d809bc] <==
	* E0813 20:45:53.977426       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:45:53.978410       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:45:53.979544       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:45:53.980576       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0813 20:45:55.760637       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 20:45:55.860784       1 controller_utils.go:1034] Caches are synced for scheduler controller
	I0813 20:48:18.906609       1 serving.go:319] Generated self-signed cert in-memory
	W0813 20:48:19.277566       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	W0813 20:48:19.277587       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	W0813 20:48:19.277600       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	I0813 20:48:19.281037       1 server.go:142] Version: v1.14.0
	I0813 20:48:19.281092       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0813 20:48:19.282226       1 authorization.go:47] Authorization is disabled
	W0813 20:48:19.282247       1 authentication.go:55] Authentication is disabled
	I0813 20:48:19.282262       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 20:48:19.282660       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 20:48:22.853878       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:48:22.878409       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:48:22.878508       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:48:22.878557       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:48:22.878580       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:48:22.878737       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:48:22.878776       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0813 20:48:24.684181       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 20:48:24.784361       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:48:07 UTC, end at Fri 2021-08-13 20:49:22 UTC. --
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.895362     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/d74eab74-fc77-11eb-b136-02429fe89262-tmp-dir") pod "metrics-server-8546d8b77b-dprvt" (UID: "d74eab74-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.895417     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-kf7vs" (UniqueName: "kubernetes.io/secret/d74eab74-fc77-11eb-b136-02429fe89262-metrics-server-token-kf7vs") pod "metrics-server-8546d8b77b-dprvt" (UID: "d74eab74-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.995716     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/d7544fce-fc77-11eb-b136-02429fe89262-tmp-volume") pod "dashboard-metrics-scraper-5b494cc544-2jx9q" (UID: "d7544fce-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.995764     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-d9mng" (UniqueName: "kubernetes.io/secret/d7544fce-fc77-11eb-b136-02429fe89262-kubernetes-dashboard-token-d9mng") pod "dashboard-metrics-scraper-5b494cc544-2jx9q" (UID: "d7544fce-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.995811     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/d75463dd-fc77-11eb-b136-02429fe89262-tmp-volume") pod "kubernetes-dashboard-5d8978d65d-2jdmv" (UID: "d75463dd-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:40 old-k8s-version-20210813204214-13784 kubelet[957]: I0813 20:48:40.995970     957 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-d9mng" (UniqueName: "kubernetes.io/secret/d75463dd-fc77-11eb-b136-02429fe89262-kubernetes-dashboard-token-d9mng") pod "kubernetes-dashboard-5d8978d65d-2jdmv" (UID: "d75463dd-fc77-11eb-b136-02429fe89262")
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.410302     957 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.410402     957 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.410482     957 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.410546     957 pod_workers.go:190] Error syncing pod d74eab74-fc77-11eb-b136-02429fe89262 ("metrics-server-8546d8b77b-dprvt_kube-system(d74eab74-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 13 20:48:41 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:41.476969     957 pod_workers.go:190] Error syncing pod d74eab74-fc77-11eb-b136-02429fe89262 ("metrics-server-8546d8b77b-dprvt_kube-system(d74eab74-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:48:47 old-k8s-version-20210813204214-13784 kubelet[957]: W0813 20:48:47.504819     957 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:48:49 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:49.491163     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:48:50 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:50.493681     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:48:53 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:53.376307     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:56.382667     957 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:56.382726     957 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:56.382812     957 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:48:56 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:48:56.382852     957 pod_workers.go:190] Error syncing pod d74eab74-fc77-11eb-b136-02429fe89262 ("metrics-server-8546d8b77b-dprvt_kube-system(d74eab74-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 kubelet[957]: W0813 20:49:07.539677     957 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:49:07 old-k8s-version-20210813204214-13784 kubelet[957]: W0813 20:49:07.554478     957 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:49:08 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:49:08.522404     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:49:09 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:49:09.370549     957 pod_workers.go:190] Error syncing pod d74eab74-fc77-11eb-b136-02429fe89262 ("metrics-server-8546d8b77b-dprvt_kube-system(d74eab74-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:49:13 old-k8s-version-20210813204214-13784 kubelet[957]: E0813 20:49:13.376270     957 pod_workers.go:190] Error syncing pod d7544fce-fc77-11eb-b136-02429fe89262 ("dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-2jx9q_kubernetes-dashboard(d7544fce-fc77-11eb-b136-02429fe89262)"
	Aug 13 20:49:17 old-k8s-version-20210813204214-13784 kubelet[957]: W0813 20:49:17.564142     957 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
	
	* 
	* ==> kubernetes-dashboard [876a6d6b9a9eb01f62dad8251578db7b0149dbe238b3c7adf0b249795f40d22b] <==
	* 2021/08/13 20:48:41 Starting overwatch
	2021/08/13 20:48:41 Using namespace: kubernetes-dashboard
	2021/08/13 20:48:41 Using in-cluster config to connect to apiserver
	2021/08/13 20:48:41 Using secret token for csrf signing
	2021/08/13 20:48:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:48:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:48:41 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 20:48:41 Generating JWE encryption key
	2021/08/13 20:48:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:48:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:48:41 Initializing JWE encryption key from synchronized object
	2021/08/13 20:48:41 Creating in-cluster Sidecar client
	2021/08/13 20:48:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:48:41 Serving insecurely on HTTP port: 9090
	2021/08/13 20:49:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [1520fea7b1a89ce1c40c6d43daa322b26c485b3a2c2e7ac65b502ed6aadf1b30] <==
	* I0813 20:48:37.559419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:48:37.567221       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:48:37.567273       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:48:54.960197       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:48:54.960367       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813204214-13784_55fad62c-1d99-4684-b7fc-517d665195ce!
	I0813 20:48:54.960322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82147b3c-fc77-11eb-8f20-0242bfc25c59", APIVersion:"v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813204214-13784_55fad62c-1d99-4684-b7fc-517d665195ce became leader
	I0813 20:48:55.060595       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813204214-13784_55fad62c-1d99-4684-b7fc-517d665195ce!
	
	* 
	* ==> storage-provisioner [db200865c8d03f306cf7f2ba03a0989c7ae0759857ee0b23b800ab5178674057] <==
	* I0813 20:48:24.585202       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0813 20:48:24.586815       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813204214-13784 -n old-k8s-version-20210813204214-13784
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813204214-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-dprvt
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813204214-13784 describe pod metrics-server-8546d8b77b-dprvt
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813204214-13784 describe pod metrics-server-8546d8b77b-dprvt: exit status 1 (65.242905ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-dprvt" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813204214-13784 describe pod metrics-server-8546d8b77b-dprvt: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210813204258-13784 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20210813204258-13784 --alsologtostderr -v=1: exit status 80 (1.973941066s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20210813204258-13784 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:51:00.805058  267666 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:51:00.805157  267666 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:00.805165  267666 out.go:311] Setting ErrFile to fd 2...
	I0813 20:51:00.805169  267666 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:00.805263  267666 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:51:00.805433  267666 out.go:305] Setting JSON to false
	I0813 20:51:00.805454  267666 mustload.go:65] Loading cluster: embed-certs-20210813204258-13784
	I0813 20:51:00.805796  267666 config.go:177] Loaded profile config "embed-certs-20210813204258-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:00.806173  267666 cli_runner.go:115] Run: docker container inspect embed-certs-20210813204258-13784 --format={{.State.Status}}
	I0813 20:51:00.847582  267666 host.go:66] Checking if "embed-certs-20210813204258-13784" exists ...
	I0813 20:51:00.848365  267666 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210813204258-13784 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:51:00.850811  267666 out.go:177] * Pausing node embed-certs-20210813204258-13784 ... 
	I0813 20:51:00.850846  267666 host.go:66] Checking if "embed-certs-20210813204258-13784" exists ...
	I0813 20:51:00.851101  267666 ssh_runner.go:149] Run: systemctl --version
	I0813 20:51:00.851137  267666 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813204258-13784
	I0813 20:51:00.893335  267666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32940 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813204258-13784/id_rsa Username:docker}
	I0813 20:51:00.993288  267666 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:01.001962  267666 pause.go:50] kubelet running: true
	I0813 20:51:01.002008  267666 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:51:01.154768  267666 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:51:01.154871  267666 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:51:01.226855  267666 cri.go:76] found id: "8bf1e8aedcbf30a5946a03d983f8d7326ecc43ed49afcf622dda7dde80ce4e32"
	I0813 20:51:01.226880  267666 cri.go:76] found id: "99a064dab5bd08de84442cabfd73b4db0b7b9b99488cc7cac221ce0f99a85408"
	I0813 20:51:01.226885  267666 cri.go:76] found id: "49225baf7b8d7b1d6a19de833aedc8509f3345bd949135a8f7889e8bbd86ae89"
	I0813 20:51:01.226889  267666 cri.go:76] found id: "7f45a57dc9d5cc0f5abd548d7f1d2d27afc05c2ba5f70f39bca2c5cd0e101dd8"
	I0813 20:51:01.226893  267666 cri.go:76] found id: "d746a120d4eae768e04d4b51465b26b36e4c86d2b8ef44f610ca5355d595a2b4"
	I0813 20:51:01.226896  267666 cri.go:76] found id: "fec203c8ef0a0fa2c640aead10bd6dfb3c5b18ba85d4372df5349b59b289a3ba"
	I0813 20:51:01.226900  267666 cri.go:76] found id: "c3a394b8c1f401d1467ee22fffd5f729b8b442b8afffcff13e2e2fb2dcff22fc"
	I0813 20:51:01.226903  267666 cri.go:76] found id: "e50f48444db7d426c632bb681fc42b2cdeb9d5b5758f99b38f3d97785f7a98fb"
	I0813 20:51:01.226906  267666 cri.go:76] found id: "0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4"
	I0813 20:51:01.226912  267666 cri.go:76] found id: "3bc09e533cbb6b30a70423557aaec8245cfd842ff0591228b65d32ff5503de07"
	I0813 20:51:01.226916  267666 cri.go:76] found id: ""
	I0813 20:51:01.226950  267666 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p embed-certs-20210813204258-13784 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210813204258-13784
helpers_test.go:236: (dbg) docker inspect embed-certs-20210813204258-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd",
	        "Created": "2021-08-13T20:43:00.697969197Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:44:49.560557171Z",
	            "FinishedAt": "2021-08-13T20:44:46.459813117Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/hosts",
	        "LogPath": "/var/lib/docker/containers/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd-json.log",
	        "Name": "/embed-certs-20210813204258-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210813204258-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210813204258-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b0000c53c6bcad082644405d1c4bc1f83b15bbd7e5690e1b2ea9b0cbb5a663e0-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0000c53c6bcad082644405d1c4bc1f83b15bbd7e5690e1b2ea9b0cbb5a663e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0000c53c6bcad082644405d1c4bc1f83b15bbd7e5690e1b2ea9b0cbb5a663e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0000c53c6bcad082644405d1c4bc1f83b15bbd7e5690e1b2ea9b0cbb5a663e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210813204258-13784",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210813204258-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210813204258-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210813204258-13784",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210813204258-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3579bec25795674e50671452bb5fdf2f9ad46211787f11ba26ce931c9f27e4c8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32940"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32939"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32936"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32938"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32937"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3579bec25795",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210813204258-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d64d0cfaddd9"
	                    ],
	                    "NetworkID": "184061f6a312da3a9376fda691c2f8ca867bd224bc1b115a224d16819cea10a3",
	                    "EndpointID": "fc7514a12638c25b0e407c01484051301dd82edbee873c1e47b3c241b6652e96",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
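The inspect dump above is the harness's full-state capture of the container; a single field can be pulled with a Go template instead, the same way minikube's cli_runner does further down in this log when it looks up the SSH port. A minimal sketch against the container named in this report, with the expected value taken from the Ports section of the dump above:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  embed-certs-20210813204258-13784
	# per the dump above, this prints: 32940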
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784: exit status 2 (321.842586ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
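The host column reads "Running" while the command still exits non-zero because this is the Pause test: minikube status encodes the state of the individual components (host, kubelet, apiserver) in its exit code, so a paused control plane yields a non-zero exit even on a running host. A quick way to see the status and the raw exit code together, as a sketch assuming the same binary and profile as above:

	out/minikube-linux-amd64 status -p embed-certs-20210813204258-13784 || echo "status exit code: $?"
	# exits 2 here while components are paused, which the harness notes "may be ok"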
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813204258-13784 logs -n 25
helpers_test.go:253: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:44:47 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:17 UTC | Fri, 13 Aug 2021 20:44:49 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:01 UTC | Fri, 13 Aug 2021 20:45:02 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:07 UTC | Fri, 13 Aug 2021 20:45:15 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:02 UTC | Fri, 13 Aug 2021 20:45:23 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:45:23 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:27 UTC | Fri, 13 Aug 2021 20:45:28 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:28 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:15 UTC | Fri, 13 Aug 2021 20:47:32 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:44 UTC | Fri, 13 Aug 2021 20:47:45 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:19 UTC | Fri, 13 Aug 2021 20:49:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:21 UTC | Fri, 13 Aug 2021 20:49:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:50:24 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:24 UTC | Fri, 13 Aug 2021 20:50:25 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:25 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:50:50 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:00 UTC | Fri, 13 Aug 2021 20:51:00 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:50:46
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:50:46.446449  264876 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:50:46.446538  264876 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:50:46.446543  264876 out.go:311] Setting ErrFile to fd 2...
	I0813 20:50:46.446546  264876 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:50:46.446643  264876 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:50:46.446877  264876 out.go:305] Setting JSON to false
	I0813 20:50:46.484317  264876 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5609,"bootTime":1628882237,"procs":327,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:50:46.484400  264876 start.go:121] virtualization: kvm guest
	I0813 20:50:46.486903  264876 out.go:177] * [newest-cni-20210813204926-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:50:46.488161  264876 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:50:46.487043  264876 notify.go:169] Checking for updates...
	I0813 20:50:46.489430  264876 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:50:46.490626  264876 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:50:46.491749  264876 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:50:46.492260  264876 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:50:46.492801  264876 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:50:46.542344  264876 docker.go:132] docker version: linux-19.03.15
	I0813 20:50:46.542449  264876 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:50:46.628465  264876 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:50:46.58185536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:50:46.628551  264876 docker.go:244] overlay module found
	I0813 20:50:46.630641  264876 out.go:177] * Using the docker driver based on existing profile
	I0813 20:50:46.630668  264876 start.go:278] selected driver: docker
	I0813 20:50:46.630675  264876 start.go:751] validating driver "docker" against &{Name:newest-cni-20210813204926-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813204926-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:50:46.630807  264876 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:50:46.630876  264876 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:50:46.630897  264876 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:50:46.632069  264876 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:50:46.632907  264876 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:50:46.714967  264876 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:50:46.669319782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:50:46.715089  264876 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:50:46.715116  264876 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:50:46.716861  264876 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:50:46.716988  264876 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 20:50:46.717016  264876 cni.go:93] Creating CNI manager for ""
	I0813 20:50:46.717025  264876 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:50:46.717039  264876 start_flags.go:277] config:
	{Name:newest-cni-20210813204926-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813204926-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:50:46.718636  264876 out.go:177] * Starting control plane node newest-cni-20210813204926-13784 in cluster newest-cni-20210813204926-13784
	I0813 20:50:46.718689  264876 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:50:46.719830  264876 out.go:177] * Pulling base image ...
	I0813 20:50:46.719897  264876 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:50:46.719939  264876 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:50:46.719943  264876 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:50:46.719960  264876 cache.go:56] Caching tarball of preloaded images
	I0813 20:50:46.720139  264876 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:50:46.720155  264876 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 20:50:46.720305  264876 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/config.json ...
	I0813 20:50:46.804375  264876 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:50:46.804403  264876 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:50:46.804418  264876 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:50:46.804456  264876 start.go:313] acquiring machines lock for newest-cni-20210813204926-13784: {Name:mkbaa3641c39167d13ad9ce12cac12d54427a8c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:50:46.804550  264876 start.go:317] acquired machines lock for "newest-cni-20210813204926-13784" in 71.573µs
	I0813 20:50:46.804570  264876 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:50:46.804576  264876 fix.go:55] fixHost starting: 
	I0813 20:50:46.804827  264876 cli_runner.go:115] Run: docker container inspect newest-cni-20210813204926-13784 --format={{.State.Status}}
	I0813 20:50:46.844464  264876 fix.go:108] recreateIfNeeded on newest-cni-20210813204926-13784: state=Stopped err=<nil>
	W0813 20:50:46.844493  264876 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:50:43.966487  228026 pod_ready.go:102] pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:45.966914  228026 pod_ready.go:102] pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:46.243996  233224 out.go:204]   - Booting up control plane ...
	I0813 20:50:44.881267  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:47.381181  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:47.966944  228026 pod_ready.go:102] pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:48.964023  228026 pod_ready.go:97] error getting pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-ks54v" not found
	I0813 20:50:48.964051  228026 pod_ready.go:81] duration metric: took 14.565864164s waiting for pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace to be "Ready" ...
	E0813 20:50:48.964061  228026 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-ks54v" not found
	I0813 20:50:48.964068  228026 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.967740  228026 pod_ready.go:92] pod "etcd-embed-certs-20210813204258-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:48.967755  228026 pod_ready.go:81] duration metric: took 3.679569ms waiting for pod "etcd-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.967767  228026 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.971119  228026 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813204258-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:48.971132  228026 pod_ready.go:81] duration metric: took 3.359118ms waiting for pod "kube-apiserver-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.971141  228026 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.974362  228026 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813204258-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:48.974377  228026 pod_ready.go:81] duration metric: took 3.230016ms waiting for pod "kube-controller-manager-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.974385  228026 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dwlks" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.978806  228026 pod_ready.go:92] pod "kube-proxy-dwlks" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:48.978823  228026 pod_ready.go:81] duration metric: took 4.431418ms waiting for pod "kube-proxy-dwlks" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.978833  228026 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:49.165009  228026 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813204258-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:49.165030  228026 pod_ready.go:81] duration metric: took 186.189666ms waiting for pod "kube-scheduler-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:49.165038  228026 pod_ready.go:38] duration metric: took 21.790277929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:50:49.165058  228026 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:50:49.165103  228026 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:50:49.188582  228026 api_server.go:70] duration metric: took 21.991032263s to wait for apiserver process to appear ...
	I0813 20:50:49.188603  228026 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:50:49.188612  228026 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0813 20:50:49.192939  228026 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0813 20:50:49.193740  228026 api_server.go:139] control plane version: v1.21.3
	I0813 20:50:49.193759  228026 api_server.go:129] duration metric: took 5.150351ms to wait for apiserver health ...
	I0813 20:50:49.193768  228026 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:50:49.368994  228026 system_pods.go:59] 9 kube-system pods found
	I0813 20:50:49.369033  228026 system_pods.go:61] "coredns-558bd4d5db-gm5pf" [fc581599-2163-40c7-b1a6-87b204c04c68] Running
	I0813 20:50:49.369041  228026 system_pods.go:61] "etcd-embed-certs-20210813204258-13784" [61d5a519-e0c6-470e-9183-00fd88cf38ae] Running
	I0813 20:50:49.369048  228026 system_pods.go:61] "kindnet-q2qfx" [3bf9d110-5126-486b-bb9e-11a2770c7684] Running
	I0813 20:50:49.369057  228026 system_pods.go:61] "kube-apiserver-embed-certs-20210813204258-13784" [3be5591c-8114-4f0f-97db-cb3ae7110d19] Running
	I0813 20:50:49.369064  228026 system_pods.go:61] "kube-controller-manager-embed-certs-20210813204258-13784" [dcdadc50-e0de-45fc-91b7-cfd79be6a078] Running
	I0813 20:50:49.369081  228026 system_pods.go:61] "kube-proxy-dwlks" [8dcb78b6-d02e-4d30-b222-7956334c1ffa] Running
	I0813 20:50:49.369089  228026 system_pods.go:61] "kube-scheduler-embed-certs-20210813204258-13784" [b64f40d0-f782-4351-8118-314c257f87c4] Running
	I0813 20:50:49.369104  228026 system_pods.go:61] "metrics-server-7c784ccb57-gzvs7" [fe8b167a-7be3-4776-9e7e-9bfa688f2f51] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:50:49.369117  228026 system_pods.go:61] "storage-provisioner" [07a9c238-44bd-4ec8-98dd-685f5680530b] Running
	I0813 20:50:49.369130  228026 system_pods.go:74] duration metric: took 175.355386ms to wait for pod list to return data ...
	I0813 20:50:49.369147  228026 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:50:49.565317  228026 default_sa.go:45] found service account: "default"
	I0813 20:50:49.565339  228026 default_sa.go:55] duration metric: took 196.181175ms for default service account to be created ...
	I0813 20:50:49.565348  228026 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:50:49.770117  228026 system_pods.go:86] 9 kube-system pods found
	I0813 20:50:49.770144  228026 system_pods.go:89] "coredns-558bd4d5db-gm5pf" [fc581599-2163-40c7-b1a6-87b204c04c68] Running
	I0813 20:50:49.770149  228026 system_pods.go:89] "etcd-embed-certs-20210813204258-13784" [61d5a519-e0c6-470e-9183-00fd88cf38ae] Running
	I0813 20:50:49.770153  228026 system_pods.go:89] "kindnet-q2qfx" [3bf9d110-5126-486b-bb9e-11a2770c7684] Running
	I0813 20:50:49.770158  228026 system_pods.go:89] "kube-apiserver-embed-certs-20210813204258-13784" [3be5591c-8114-4f0f-97db-cb3ae7110d19] Running
	I0813 20:50:49.770162  228026 system_pods.go:89] "kube-controller-manager-embed-certs-20210813204258-13784" [dcdadc50-e0de-45fc-91b7-cfd79be6a078] Running
	I0813 20:50:49.770166  228026 system_pods.go:89] "kube-proxy-dwlks" [8dcb78b6-d02e-4d30-b222-7956334c1ffa] Running
	I0813 20:50:49.770170  228026 system_pods.go:89] "kube-scheduler-embed-certs-20210813204258-13784" [b64f40d0-f782-4351-8118-314c257f87c4] Running
	I0813 20:50:49.770177  228026 system_pods.go:89] "metrics-server-7c784ccb57-gzvs7" [fe8b167a-7be3-4776-9e7e-9bfa688f2f51] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:50:49.770181  228026 system_pods.go:89] "storage-provisioner" [07a9c238-44bd-4ec8-98dd-685f5680530b] Running
	I0813 20:50:49.770191  228026 system_pods.go:126] duration metric: took 204.837118ms to wait for k8s-apps to be running ...
	I0813 20:50:49.770203  228026 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:50:49.770248  228026 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:50:49.779800  228026 system_svc.go:56] duration metric: took 9.589628ms WaitForService to wait for kubelet.
	I0813 20:50:49.779821  228026 kubeadm.go:547] duration metric: took 22.582276514s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:50:49.779846  228026 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:50:49.966027  228026 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:50:49.966052  228026 node_conditions.go:123] node cpu capacity is 8
	I0813 20:50:49.966066  228026 node_conditions.go:105] duration metric: took 186.214917ms to run NodePressure ...
	I0813 20:50:49.966076  228026 start.go:231] waiting for startup goroutines ...
	I0813 20:50:50.009523  228026 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:50:50.011851  228026 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813204258-13784" cluster and "default" namespace by default
	I0813 20:50:46.846685  264876 out.go:177] * Restarting existing docker container for "newest-cni-20210813204926-13784" ...
	I0813 20:50:46.846761  264876 cli_runner.go:115] Run: docker start newest-cni-20210813204926-13784
	I0813 20:50:48.180153  264876 cli_runner.go:168] Completed: docker start newest-cni-20210813204926-13784: (1.333363318s)
	I0813 20:50:48.180235  264876 cli_runner.go:115] Run: docker container inspect newest-cni-20210813204926-13784 --format={{.State.Status}}
	I0813 20:50:48.223479  264876 kic.go:420] container "newest-cni-20210813204926-13784" state is running.
	I0813 20:50:48.223889  264876 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813204926-13784
	I0813 20:50:48.265787  264876 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/config.json ...
	I0813 20:50:48.265983  264876 machine.go:88] provisioning docker machine ...
	I0813 20:50:48.266011  264876 ubuntu.go:169] provisioning hostname "newest-cni-20210813204926-13784"
	I0813 20:50:48.266061  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:48.310967  264876 main.go:130] libmachine: Using SSH client type: native
	I0813 20:50:48.311181  264876 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0813 20:50:48.311214  264876 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813204926-13784 && echo "newest-cni-20210813204926-13784" | sudo tee /etc/hostname
	I0813 20:50:48.311756  264876 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58560->127.0.0.1:32970: read: connection reset by peer
	I0813 20:50:51.449306  264876 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813204926-13784
	
	I0813 20:50:51.449379  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:51.489414  264876 main.go:130] libmachine: Using SSH client type: native
	I0813 20:50:51.489609  264876 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0813 20:50:51.489631  264876 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813204926-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813204926-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813204926-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:50:51.613227  264876 main.go:130] libmachine: SSH cmd err, output: <nil>: 
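
The hostname snippet above is idempotent: it rewrites an existing 127.0.1.1 line when present and appends one otherwise. A minimal standalone sketch of the same check-and-rewrite logic in Go (illustrative only, not minikube's actual code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname returns hosts content that maps 127.0.1.1 to name,
// mirroring the grep/sed/tee logic in the provisioning log above:
// leave the file alone if name is already mapped, rewrite an existing
// 127.0.1.1 line if there is one, otherwise append a new line.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already mapped
	}
	line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if line127.MatchString(hosts) {
		return line127.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(in, "newest-cni-20210813204926-13784"))
}
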
	I0813 20:50:51.613256  264876 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:50:51.613297  264876 ubuntu.go:177] setting up certificates
	I0813 20:50:51.613308  264876 provision.go:83] configureAuth start
	I0813 20:50:51.613358  264876 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813204926-13784
	I0813 20:50:51.654619  264876 provision.go:138] copyHostCerts
	I0813 20:50:51.654685  264876 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:50:51.654697  264876 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:50:51.654757  264876 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:50:51.654839  264876 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:50:51.654849  264876 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:50:51.654870  264876 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:50:51.654925  264876 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:50:51.654932  264876 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:50:51.654951  264876 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:50:51.655069  264876 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813204926-13784 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210813204926-13784]
	I0813 20:50:51.807477  264876 provision.go:172] copyRemoteCerts
	I0813 20:50:51.807532  264876 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:50:51.807567  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:51.849378  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:51.936194  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:50:51.953431  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 20:50:51.970331  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:50:51.985704  264876 provision.go:86] duration metric: configureAuth took 372.382117ms
	I0813 20:50:51.985735  264876 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:50:51.985896  264876 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:50:51.986003  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.028196  264876 main.go:130] libmachine: Using SSH client type: native
	I0813 20:50:52.028364  264876 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0813 20:50:52.028383  264876 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:50:52.481983  264876 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:50:52.482016  264876 machine.go:91] provisioned docker machine in 4.216018042s
	I0813 20:50:52.482031  264876 start.go:267] post-start starting for "newest-cni-20210813204926-13784" (driver="docker")
	I0813 20:50:52.482045  264876 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:50:52.482112  264876 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:50:52.482156  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.526617  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:52.620328  264876 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:50:52.622900  264876 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:50:52.622920  264876 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:50:52.622930  264876 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:50:52.622938  264876 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:50:52.622953  264876 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:50:52.623008  264876 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:50:52.623113  264876 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:50:52.623231  264876 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:50:52.629735  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:50:52.645030  264876 start.go:270] post-start completed in 162.979209ms
	I0813 20:50:52.645097  264876 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:50:52.645138  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.692092  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:52.777613  264876 fix.go:57] fixHost completed within 5.97303099s
	I0813 20:50:52.777641  264876 start.go:80] releasing machines lock for "newest-cni-20210813204926-13784", held for 5.973075432s
	I0813 20:50:52.777728  264876 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813204926-13784
	I0813 20:50:52.819833  264876 ssh_runner.go:149] Run: systemctl --version
	I0813 20:50:52.819878  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.819892  264876 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:50:52.819956  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.874165  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:52.879736  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:52.966346  264876 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:50:53.124960  264876 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:50:53.133990  264876 docker.go:153] disabling docker service ...
	I0813 20:50:53.134037  264876 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:50:53.142305  264876 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:50:53.150817  264876 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:50:53.228419  264876 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:50:53.302804  264876 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:50:53.312497  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:50:53.324473  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:50:53.331802  264876 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:50:53.331831  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 20:50:53.339358  264876 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:50:53.345260  264876 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:50:53.345303  264876 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:50:53.351997  264876 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
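
The sysctl/modprobe/echo sequence above is a probe-then-fix pattern for the kernel networking prerequisites CRI-O needs: if the bridge-netfilter proc entry is missing, load br_netfilter, then force IPv4 forwarding on. A rough standalone equivalent in Go (assumes root on Linux; illustrative, not minikube's code):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Probe bridge netfilter; a failure here (status 255 in the log
	// above) means the proc entry is absent and the br_netfilter
	// kernel module still has to be loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("probe failed (%v), loading br_netfilter", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}
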
	I0813 20:50:53.358759  264876 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:50:53.427077  264876 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:50:53.435916  264876 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:50:53.435971  264876 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:50:53.438776  264876 start.go:413] Will wait 60s for crictl version
	I0813 20:50:53.438820  264876 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:50:53.467202  264876 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:50:53.467283  264876 ssh_runner.go:149] Run: crio --version
	I0813 20:50:53.532867  264876 ssh_runner.go:149] Run: crio --version
	I0813 20:50:53.609321  264876 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0813 20:50:53.609391  264876 cli_runner.go:115] Run: docker network inspect newest-cni-20210813204926-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:50:53.649220  264876 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0813 20:50:53.652532  264876 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:50:53.664141  264876 out.go:177]   - kubelet.network-plugin=cni
	I0813 20:50:49.881092  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:51.881399  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:53.881703  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:53.665588  264876 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 20:50:53.665681  264876 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:50:53.665756  264876 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:50:53.699602  264876 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:50:53.699626  264876 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:50:53.699675  264876 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:50:53.724335  264876 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:50:53.724359  264876 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:50:53.724444  264876 ssh_runner.go:149] Run: crio config
	I0813 20:50:53.811924  264876 cni.go:93] Creating CNI manager for ""
	I0813 20:50:53.811946  264876 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:50:53.811961  264876 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 20:50:53.811980  264876 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813204926-13784 NodeName:newest-cni-20210813204926-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:50:53.812134  264876 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210813204926-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
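
The kubeadm config above is rendered from the option struct logged at kubeadm.go:153. A toy illustration of such a render step with Go's text/template; the Opts struct, its field names, and the template here are invented for the example and cover only a small subset of the real config:

package main

import (
	"os"
	"text/template"
)

// Opts is a made-up subset of the options visible in the log above.
type Opts struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	// Fill the template with the same values seen in the log.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, Opts{
		AdvertiseAddress: "192.168.76.2",
		NodeName:         "newest-cni-20210813204926-13784",
		PodSubnet:        "192.168.111.111/16",
		K8sVersion:       "v1.22.0-rc.0",
	})
}
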
	I0813 20:50:53.812240  264876 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813204926-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813204926-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:50:53.812300  264876 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 20:50:53.819186  264876 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:50:53.819240  264876 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:50:53.825554  264876 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (604 bytes)
	I0813 20:50:53.836945  264876 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 20:50:53.848402  264876 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0813 20:50:53.860427  264876 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:50:53.865189  264876 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:50:53.879272  264876 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784 for IP: 192.168.76.2
	I0813 20:50:53.879327  264876 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:50:53.879348  264876 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:50:53.879409  264876 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/client.key
	I0813 20:50:53.879433  264876 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/apiserver.key.31bdca25
	I0813 20:50:53.879453  264876 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/proxy-client.key
	I0813 20:50:53.879580  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:50:53.879650  264876 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:50:53.879665  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:50:53.879707  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:50:53.879741  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:50:53.879773  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:50:53.879834  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:50:53.881218  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:50:53.899521  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:50:53.915894  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:50:53.931633  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:50:53.947169  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:50:53.963670  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:50:53.981391  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:50:53.997988  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:50:54.014085  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:50:54.030641  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:50:54.046447  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:50:54.062151  264876 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:50:54.075474  264876 ssh_runner.go:149] Run: openssl version
	I0813 20:50:54.080357  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:50:54.087793  264876 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:50:54.090703  264876 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:50:54.090747  264876 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:50:54.095830  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:50:54.102955  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:50:54.110898  264876 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:50:54.113881  264876 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:50:54.113927  264876 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:50:54.118800  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:50:54.124987  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:50:54.131683  264876 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:50:54.134491  264876 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:50:54.134587  264876 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:50:54.138936  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
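
The openssl x509 -hash calls above compute the subject-hash filenames (51391683.0, 3ec20f2e.0, b5213941.0) under which OpenSSL-based clients look up CAs in /etc/ssl/certs. A small sketch of that hash-and-symlink step, shelling out to openssl the same way the log does (illustrative, not minikube's code):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash symlinks certPath into dir under the
// "<subject-hash>.0" name that OpenSSL's CA directory lookup expects
// (".0" is the index used when there are no hash collisions).
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return os.Symlink(certPath, dir+"/"+hash+".0")
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}
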
	I0813 20:50:54.145022  264876 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813204926-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813204926-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:50:54.145156  264876 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:50:54.145195  264876 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:50:54.170844  264876 cri.go:76] found id: ""
	I0813 20:50:54.170916  264876 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:50:54.178980  264876 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:50:54.179013  264876 kubeadm.go:600] restartCluster start
	I0813 20:50:54.179057  264876 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:50:54.185858  264876 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:54.187404  264876 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210813204926-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:50:54.188271  264876 kubeconfig.go:128] "newest-cni-20210813204926-13784" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:50:54.189791  264876 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:50:54.193200  264876 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:50:54.199851  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:54.199904  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:54.212679  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:54.413069  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:54.413144  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:54.427381  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:54.613550  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:54.613635  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:54.626416  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:54.813646  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:54.813728  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:54.826912  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.013120  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.013184  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.026175  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.213415  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.213538  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.226473  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.413724  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.413815  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.427168  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.613554  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.613640  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.626804  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.813048  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.813125  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.826688  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.012836  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.012934  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.026593  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.213796  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.213876  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.226975  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.381388  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:57.375979  240241 pod_ready.go:81] duration metric: took 4m0.40024312s waiting for pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace to be "Ready" ...
	E0813 20:50:57.376007  240241 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:50:57.376033  240241 pod_ready.go:38] duration metric: took 4m41.352808191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:50:57.376072  240241 kubeadm.go:604] restartCluster took 4m59.978005133s
	W0813 20:50:57.376213  240241 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:50:57.376247  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 20:50:59.796806  233224 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:00.211538  233224 cni.go:93] Creating CNI manager for ""
	I0813 20:51:00.211560  233224 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:50:56.413440  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.413541  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.426726  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.612916  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.613005  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.626419  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.813721  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.813805  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.827157  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.013383  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:57.013458  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:57.026571  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.213811  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:57.213884  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:57.229656  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.229678  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:57.229721  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:57.257903  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.257931  264876 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 20:50:57.257940  264876 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:50:57.257953  264876 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:50:57.258022  264876 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:50:57.284765  264876 cri.go:76] found id: ""
	I0813 20:50:57.284833  264876 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:50:57.293907  264876 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:50:57.300372  264876 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 13 20:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 13 20:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug 13 20:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 20:49 /etc/kubernetes/scheduler.conf
	
	I0813 20:50:57.300428  264876 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:50:57.307066  264876 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:50:57.313234  264876 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:50:57.319470  264876 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.319524  264876 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:50:57.326182  264876 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:50:57.332414  264876 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.332457  264876 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:50:57.338423  264876 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:50:57.344511  264876 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
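
The grep/rm sequence above keeps only the kubeconfigs that already point at https://control-plane.minikube.internal:8443 and deletes the rest, so the kubeadm init phases that follow can regenerate them. A condensed sketch of that filter (hypothetical helper, not the actual kubeadm.go logic):

package main

import (
	"bytes"
	"log"
	"os"
)

// pruneStale removes each config file that does not mention endpoint,
// mirroring the grep-then-rm sequence in the log above. Files that are
// missing or already correct are left untouched.
func pruneStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || bytes.Contains(data, []byte(endpoint)) {
			continue
		}
		log.Printf("%s lacks %s, removing so it can be regenerated", f, endpoint)
		os.Remove(f)
	}
}

func main() {
	pruneStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
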
	I0813 20:50:57.344529  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:57.393025  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:58.119506  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:58.280484  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:58.360789  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:58.415944  264876 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:50:58.416007  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:50:58.929847  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:50:59.429510  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:50:59.930153  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:00.429376  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:00.930093  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
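
Both retry stretches in this restart (the roughly 200ms "Checking apiserver status" probes earlier and the roughly 500ms pgrep loop here) are plain poll-until-deadline loops. A generic Go sketch of the pattern; the interval and timeout values are illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// pollUntil runs check every interval until it succeeds or timeout
// elapses, the same shape as the pgrep retries logged above.
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(500*time.Millisecond, 60*time.Second, func() error {
		// Same probe as the log: does a kube-apiserver process exist?
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	})
	fmt.Println("apiserver up:", err == nil)
}
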
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:44:49 UTC, end at Fri 2021-08-13 20:51:03 UTC. --
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.002869712Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=26c3122c-22d0-4228-a462-e9e1f68dff97 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.004644151Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=26c3122c-22d0-4228-a462-e9e1f68dff97 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.006244825Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=1b1c5d3f-505c-4d3d-abbb-0b0ce2f7fe61 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.024436732Z" level=info msg="Removed container fe2b5585c3fa39e26e9d5cf44168846f79530b1c249700777cad8c341b644614: kube-system/coredns-558bd4d5db-ks54v/coredns" id=ad91bb3a-b5d9-4810-bc2d-d35ef7736f59 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.181017020Z" level=info msg="Created container 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=1b1c5d3f-505c-4d3d-abbb-0b0ce2f7fe61 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.181545418Z" level=info msg="Starting container: 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8" id=8d5b0b58-57b4-4190-a83d-9632e922b214 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.206319649Z" level=info msg="Started container 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=8d5b0b58-57b4-4190-a83d-9632e922b214 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:50:40 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:40.004074795Z" level=info msg="Removing container: 1417600f5b638dfa404383feb9553768fa1c9bfda0edf2630436d174c4f61279" id=a346e863-8fb7-4a4b-86c1-079aa3662188 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:50:40 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:40.042096148Z" level=info msg="Removed container 1417600f5b638dfa404383feb9553768fa1c9bfda0edf2630436d174c4f61279: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=a346e863-8fb7-4a4b-86c1-079aa3662188 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:50:41 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:41.846364389Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=555a787d-f484-429c-baa3-fafb1a4556e5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:41 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:41.846604957Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=555a787d-f484-429c-baa3-fafb1a4556e5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:41 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:41.847070834Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=f5013cb4-dbb3-4cee-97a4-931fea72a558 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:50:41 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:41.858943915Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:50:55 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:55.846563931Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=06054aa7-d025-4df3-9290-bbec8d5fee4d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:55 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:55.846837883Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=06054aa7-d025-4df3-9290-bbec8d5fee4d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.845821279Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=7d860a69-c574-4e09-829f-8f70dd0254b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.847766194Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7d860a69-c574-4e09-829f-8f70dd0254b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.848502381Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=2281b9c5-1eb1-459f-bee5-9cf22933b485 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.849998222Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2281b9c5-1eb1-459f-bee5-9cf22933b485 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.850793815Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=bbb4207b-c353-4446-a216-1df009f0c604 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:50:58 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:58.041954692Z" level=info msg="Created container 0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=bbb4207b-c353-4446-a216-1df009f0c604 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:50:58 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:58.042363057Z" level=info msg="Starting container: 0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4" id=45cf925e-2c1b-41f6-a1ea-e070313a9c31 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:50:58 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:58.070046770Z" level=info msg="Started container 0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=45cf925e-2c1b-41f6-a1ea-e070313a9c31 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:50:59 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:59.044868996Z" level=info msg="Removing container: 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8" id=21a8d7c4-e590-4075-92c6-b0be58c80c75 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:50:59 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:59.082650220Z" level=info msg="Removed container 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=21a8d7c4-e590-4075-92c6-b0be58c80c75 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	0dc20b85f9c80       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   5 seconds ago       Exited              dashboard-metrics-scraper   2                   1f46ccae6d64a
	3bc09e533cbb6       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   32 seconds ago      Running             kubernetes-dashboard        0                   a298ce79dba9b
	8bf1e8aedcbf3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   33 seconds ago      Running             storage-provisioner         0                   8a464f7e39ae5
	99a064dab5bd0       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   33 seconds ago      Running             coredns                     0                   6c11ed8d86b44
	49225baf7b8d7       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   35 seconds ago      Running             kube-proxy                  0                   343023a5dac1a
	7f45a57dc9d5c       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   35 seconds ago      Running             kindnet-cni                 0                   a36a19fd30682
	d746a120d4eae       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   57 seconds ago      Running             kube-apiserver              0                   9078fe56df8e2
	fec203c8ef0a0       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   57 seconds ago      Running             kube-controller-manager     0                   fc33ea841e781
	c3a394b8c1f40       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   57 seconds ago      Running             kube-scheduler              0                   0c61a7c654523
	e50f48444db7d       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   57 seconds ago      Running             etcd                        0                   89c09c07c7db9
	
	* 
	* ==> coredns [99a064dab5bd08de84442cabfd73b4db0b7b9b99488cc7cac221ce0f99a85408] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210813204258-13784
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20210813204258-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=embed-certs-20210813204258-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_50_13_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:50:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210813204258-13784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:50:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:50:49 +0000   Fri, 13 Aug 2021 20:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:50:49 +0000   Fri, 13 Aug 2021 20:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:50:49 +0000   Fri, 13 Aug 2021 20:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:50:49 +0000   Fri, 13 Aug 2021 20:50:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20210813204258-13784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                35ecb4c2-d272-49c5-8ada-a920e3507cbd
	  Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-gm5pf                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     36s
	  kube-system                 etcd-embed-certs-20210813204258-13784                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         44s
	  kube-system                 kindnet-q2qfx                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-embed-certs-20210813204258-13784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-embed-certs-20210813204258-13784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-dwlks                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-embed-certs-20210813204258-13784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 metrics-server-7c784ccb57-gzvs7                             100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         34s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-cdvnr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-hrz2v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  61s (x4 over 62s)  kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x4 over 62s)  kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x4 over 62s)  kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 45s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             45s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeNotReady
	  Normal  NodeReady                37s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeReady
	  Normal  Starting                 34s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.863954] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000003] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +1.695863] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000003] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +1.392691] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth825c196e
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b6 08 cd a5 f5 a7 08 06        ..............
	[  +0.344665] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth546386a2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 3a 56 8b 4b bf 71 08 06        ......:V.K.q..
	[  +1.786377] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000002] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +0.213329] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethef57fd78
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff b6 5e bf c9 d0 3e 08 06        .......^...>..
	[  +0.748014] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth1f03081f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 a4 c4 4d 81 45 08 06        .........M.E..
	[  +0.031657] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev vethc9ff12d8
	[  +0.000086] ll header: 00000000: ff ff ff ff ff ff 46 64 65 38 37 ec 08 06        ......Fde87...
	[  +2.087016] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000002] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +3.827412] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000003] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +3.071814] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-184061f6a312
	[  +0.000002] ll header: 00000000: 02 42 3e 2f 9b 3d 02 42 c0 a8 3a 02 08 00        .B>/.=.B..:...
	[  +8.102656] cgroup: cgroup2: unknown option "nsdelegate"
	[  +2.392585] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000002] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	
	* 
	* ==> etcd [e50f48444db7d426c632bb681fc42b2cdeb9d5b5758f99b38f3d97785f7a98fb] <==
	* 2021-08-13 20:50:05.858546 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:50:05.858655 I | embed: listening for peers on 192.168.58.2:2380
	2021-08-13 20:50:05.858741 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 20:50:06 INFO: b2c6679ac05f2cf1 is starting a new election at term 1
	raft2021/08/13 20:50:06 INFO: b2c6679ac05f2cf1 became candidate at term 2
	raft2021/08/13 20:50:06 INFO: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	raft2021/08/13 20:50:06 INFO: b2c6679ac05f2cf1 became leader at term 2
	raft2021/08/13 20:50:06 INFO: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-13 20:50:06.558059 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:50:06.558871 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:50:06.558931 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:50:06.558966 I | embed: ready to serve client requests
	2021-08-13 20:50:06.559067 I | etcdserver: published {Name:embed-certs-20210813204258-13784 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-13 20:50:06.559112 I | embed: ready to serve client requests
	2021-08-13 20:50:06.562031 I | embed: serving client requests on 192.168.58.2:2379
	2021-08-13 20:50:06.562379 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:50:15.300959 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (122.086046ms) to execute
	2021-08-13 20:50:15.301008 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (464.775602ms) to execute
	2021-08-13 20:50:15.301111 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (512.804506ms) to execute
	2021-08-13 20:50:20.278751 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (101.941096ms) to execute
	2021-08-13 20:50:20.278832 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-20210813204258-13784\" " with result "range_response_count:1 size:5726" took too long (168.420509ms) to execute
	2021-08-13 20:50:25.266893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:50:32.818634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:50:42.818911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:50:52.818357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:51:03 up  1:33,  0 users,  load average: 1.75, 2.18, 2.08
	Linux embed-certs-20210813204258-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [d746a120d4eae768e04d4b51465b26b36e4c86d2b8ef44f610ca5355d595a2b4] <==
	* I0813 20:50:11.412451       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 20:50:11.415707       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 20:50:11.415727       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:50:11.826578       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:50:11.872745       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 20:50:11.998250       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0813 20:50:11.998998       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:50:12.002669       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 20:50:12.989623       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:50:13.395310       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:50:13.471881       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 20:50:15.302757       1 trace.go:205] Trace[1824767926]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/tokens-controller,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:50:14.787) (total time: 514ms):
	Trace[1824767926]: ---"About to write a response" 514ms (20:50:00.302)
	Trace[1824767926]: [514.964005ms] [514.964005ms] END
	I0813 20:50:18.816282       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:50:27.061289       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:50:27.211943       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 20:50:32.103106       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 20:50:32.103172       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:50:32.103181       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 20:50:44.885458       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:50:44.885540       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:50:44.885551       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [fec203c8ef0a0fa2c640aead10bd6dfb3c5b18ba85d4372df5349b59b289a3ba] <==
	* I0813 20:50:29.177404       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 20:50:29.362275       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:50:29.572699       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-gzvs7"
	I0813 20:50:29.974064       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 20:50:29.989448       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.059554       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.063382       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 20:50:30.067310       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.068048       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:50:30.072394       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.076831       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:50:30.077274       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.077329       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.080035       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.080082       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.082232       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.082526       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.159452       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.159493       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.167677       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.167751       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:50:30.184177       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-hrz2v"
	I0813 20:50:30.262431       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-cdvnr"
	E0813 20:50:56.529145       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:50:57.080744       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [49225baf7b8d7b1d6a19de833aedc8509f3345bd949135a8f7889e8bbd86ae89] <==
	* I0813 20:50:29.159757       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:50:29.159825       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:50:29.159882       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:50:29.486534       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:50:29.486563       1 server_others.go:212] Using iptables Proxier.
	I0813 20:50:29.486573       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:50:29.486583       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:50:29.486851       1 server.go:643] Version: v1.21.3
	I0813 20:50:29.558797       1 config.go:315] Starting service config controller
	I0813 20:50:29.558833       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:50:29.558859       1 config.go:224] Starting endpoint slice config controller
	I0813 20:50:29.558864       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:50:29.565263       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:50:29.568710       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:50:29.659541       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:50:29.661814       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [c3a394b8c1f401d1467ee22fffd5f729b8b442b8afffcff13e2e2fb2dcff22fc] <==
	* I0813 20:50:10.479958       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:50:10.484973       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:50:10.485077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:50:10.485157       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:50:10.485359       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:50:10.487566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:10.488327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:10.488566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:50:10.488585       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:50:10.488674       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:50:10.488736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:10.488811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:50:10.488828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:50:10.488854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:10.488860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:50:11.380018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:50:11.487053       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:50:11.526252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:50:11.558724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:11.560630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:50:11.595983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:50:11.598828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:50:11.674285       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:11.690651       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0813 20:50:13.581042       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:44:49 UTC, end at Fri 2021-08-13 20:51:03 UTC. --
	Aug 13 20:50:40 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:40.003295    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:40 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:40.003667    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:41.006177    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.006443    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.863462    5530 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.863507    5530 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.863652    5530 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7bgfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-gzvs7_kube-system(fe8b167a-7be3-4776-9e7e-9bfa688f2f51): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.863707    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-gzvs7" podUID=fe8b167a-7be3-4776-9e7e-9bfa688f2f51
	Aug 13 20:50:42 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:42.007793    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:42 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:42.008146    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:50:49 embed-certs-20210813204258-13784 kubelet[5530]: W0813 20:50:49.372820    5530 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:50:49 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:49.379014    5530 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/docker/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:50:55 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:55.847758    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-gzvs7" podUID=fe8b167a-7be3-4776-9e7e-9bfa688f2f51
	Aug 13 20:50:57 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:57.845281    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:59.041065    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:59.041378    5530 scope.go:111] "RemoveContainer" containerID="0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4"
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:59.041779    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: W0813 20:50:59.463752    5530 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:59.472841    5530 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/docker/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:51:00 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:51:00.272447    5530 scope.go:111] "RemoveContainer" containerID="0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4"
	Aug 13 20:51:00 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:51:00.272822    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:51:01 embed-certs-20210813204258-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:51:01 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:51:01.148026    5530 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:51:01 embed-certs-20210813204258-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:51:01 embed-certs-20210813204258-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [3bc09e533cbb6b30a70423557aaec8245cfd842ff0591228b65d32ff5503de07] <==
	* 2021/08/13 20:50:31 Using namespace: kubernetes-dashboard
	2021/08/13 20:50:31 Using in-cluster config to connect to apiserver
	2021/08/13 20:50:31 Using secret token for csrf signing
	2021/08/13 20:50:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:50:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:50:31 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 20:50:31 Generating JWE encryption key
	2021/08/13 20:50:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:50:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:50:32 Initializing JWE encryption key from synchronized object
	2021/08/13 20:50:32 Creating in-cluster Sidecar client
	2021/08/13 20:50:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:50:32 Serving insecurely on HTTP port: 9090
	2021/08/13 20:50:31 Starting overwatch
	
	* 
	* ==> storage-provisioner [8bf1e8aedcbf30a5946a03d983f8d7326ecc43ed49afcf622dda7dde80ce4e32] <==
	* I0813 20:50:30.763415       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:50:30.771188       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:50:30.771233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:50:30.777421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:50:30.777569       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e6ab65a-9af6-4aa9-9811-02273863e98b", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210813204258-13784_0634d70b-e4c9-4e74-b403-51f35f3c09c2 became leader
	I0813 20:50:30.777666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813204258-13784_0634d70b-e4c9-4e74-b403-51f35f3c09c2!
	I0813 20:50:30.878466       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813204258-13784_0634d70b-e4c9-4e74-b403-51f35f3c09c2!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784: exit status 2 (370.116197ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210813204258-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-gzvs7
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210813204258-13784 describe pod metrics-server-7c784ccb57-gzvs7
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210813204258-13784 describe pod metrics-server-7c784ccb57-gzvs7: exit status 1 (83.741394ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-gzvs7" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210813204258-13784 describe pod metrics-server-7c784ccb57-gzvs7: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210813204258-13784
helpers_test.go:236: (dbg) docker inspect embed-certs-20210813204258-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd",
	        "Created": "2021-08-13T20:43:00.697969197Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:44:49.560557171Z",
	            "FinishedAt": "2021-08-13T20:44:46.459813117Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/hosts",
	        "LogPath": "/var/lib/docker/containers/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd-json.log",
	        "Name": "/embed-certs-20210813204258-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210813204258-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210813204258-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b0000c53c6bcad082644405d1c4bc1f83b15bbd7e5690e1b2ea9b0cbb5a663e0-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0000c53c6bcad082644405d1c4bc1f83b15bbd7e5690e1b2ea9b0cbb5a663e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0000c53c6bcad082644405d1c4bc1f83b15bbd7e5690e1b2ea9b0cbb5a663e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0000c53c6bcad082644405d1c4bc1f83b15bbd7e5690e1b2ea9b0cbb5a663e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210813204258-13784",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210813204258-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210813204258-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210813204258-13784",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210813204258-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3579bec25795674e50671452bb5fdf2f9ad46211787f11ba26ce931c9f27e4c8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32940"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32939"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32936"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32938"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32937"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3579bec25795",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210813204258-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d64d0cfaddd9"
	                    ],
	                    "NetworkID": "184061f6a312da3a9376fda691c2f8ca867bd224bc1b115a224d16819cea10a3",
	                    "EndpointID": "fc7514a12638c25b0e407c01484051301dd82edbee873c1e47b3c241b6652e96",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
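For reference, the inspect output above shows every container port published on a 127.0.0.1 host port (22/tcp on 32940, 2376/tcp on 32939, 8443/tcp on 32937, and so on). A single binding can be read back with the same Go-template query the harness itself runs later in this log; a minimal sketch, assuming the embed-certs container is still present:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-20210813204258-13784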
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784: exit status 2 (380.960539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
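The non-zero exit here is expected rather than a command failure: minikube status reflects cluster state in its exit code, so a cluster that has just been paused still prints "Running" for the host while exiting 2, which is why the harness notes "(may be ok)". A minimal sketch for surfacing both the printed state and the code when reproducing by hand (same flags as the command above):

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784; echo "exit status: $?"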
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813204258-13784 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20210813204258-13784 logs -n 25: (1.222827485s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:17 UTC | Fri, 13 Aug 2021 20:44:49 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:01 UTC | Fri, 13 Aug 2021 20:45:02 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:07 UTC | Fri, 13 Aug 2021 20:45:15 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:02 UTC | Fri, 13 Aug 2021 20:45:23 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:45:23 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:27 UTC | Fri, 13 Aug 2021 20:45:28 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:28 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:15 UTC | Fri, 13 Aug 2021 20:47:32 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:44 UTC | Fri, 13 Aug 2021 20:47:45 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:19 UTC | Fri, 13 Aug 2021 20:49:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:21 UTC | Fri, 13 Aug 2021 20:49:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:50:24 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:24 UTC | Fri, 13 Aug 2021 20:50:25 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:25 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:50:50 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:00 UTC | Fri, 13 Aug 2021 20:51:00 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:03 UTC | Fri, 13 Aug 2021 20:51:03 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:50:46
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:50:46.446449  264876 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:50:46.446538  264876 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:50:46.446543  264876 out.go:311] Setting ErrFile to fd 2...
	I0813 20:50:46.446546  264876 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:50:46.446643  264876 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:50:46.446877  264876 out.go:305] Setting JSON to false
	I0813 20:50:46.484317  264876 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5609,"bootTime":1628882237,"procs":327,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:50:46.484400  264876 start.go:121] virtualization: kvm guest
	I0813 20:50:46.486903  264876 out.go:177] * [newest-cni-20210813204926-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:50:46.488161  264876 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:50:46.487043  264876 notify.go:169] Checking for updates...
	I0813 20:50:46.489430  264876 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:50:46.490626  264876 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:50:46.491749  264876 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:50:46.492260  264876 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:50:46.492801  264876 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:50:46.542344  264876 docker.go:132] docker version: linux-19.03.15
	I0813 20:50:46.542449  264876 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:50:46.628465  264876 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:50:46.58185536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:50:46.628551  264876 docker.go:244] overlay module found
	I0813 20:50:46.630641  264876 out.go:177] * Using the docker driver based on existing profile
	I0813 20:50:46.630668  264876 start.go:278] selected driver: docker
	I0813 20:50:46.630675  264876 start.go:751] validating driver "docker" against &{Name:newest-cni-20210813204926-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813204926-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:50:46.630807  264876 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:50:46.630876  264876 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:50:46.630897  264876 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:50:46.632069  264876 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:50:46.632907  264876 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:50:46.714967  264876 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:50:46.669319782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 20:50:46.715089  264876 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:50:46.715116  264876 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:50:46.716861  264876 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:50:46.716988  264876 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 20:50:46.717016  264876 cni.go:93] Creating CNI manager for ""
	I0813 20:50:46.717025  264876 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:50:46.717039  264876 start_flags.go:277] config:
	{Name:newest-cni-20210813204926-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813204926-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:50:46.718636  264876 out.go:177] * Starting control plane node newest-cni-20210813204926-13784 in cluster newest-cni-20210813204926-13784
	I0813 20:50:46.718689  264876 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:50:46.719830  264876 out.go:177] * Pulling base image ...
	I0813 20:50:46.719897  264876 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:50:46.719939  264876 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:50:46.719943  264876 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:50:46.719960  264876 cache.go:56] Caching tarball of preloaded images
	I0813 20:50:46.720139  264876 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:50:46.720155  264876 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 20:50:46.720305  264876 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/config.json ...
	I0813 20:50:46.804375  264876 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:50:46.804403  264876 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:50:46.804418  264876 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:50:46.804456  264876 start.go:313] acquiring machines lock for newest-cni-20210813204926-13784: {Name:mkbaa3641c39167d13ad9ce12cac12d54427a8c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:50:46.804550  264876 start.go:317] acquired machines lock for "newest-cni-20210813204926-13784" in 71.573µs
	I0813 20:50:46.804570  264876 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:50:46.804576  264876 fix.go:55] fixHost starting: 
	I0813 20:50:46.804827  264876 cli_runner.go:115] Run: docker container inspect newest-cni-20210813204926-13784 --format={{.State.Status}}
	I0813 20:50:46.844464  264876 fix.go:108] recreateIfNeeded on newest-cni-20210813204926-13784: state=Stopped err=<nil>
	W0813 20:50:46.844493  264876 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:50:43.966487  228026 pod_ready.go:102] pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:45.966914  228026 pod_ready.go:102] pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:46.243996  233224 out.go:204]   - Booting up control plane ...
	I0813 20:50:44.881267  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:47.381181  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:47.966944  228026 pod_ready.go:102] pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:48.964023  228026 pod_ready.go:97] error getting pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-ks54v" not found
	I0813 20:50:48.964051  228026 pod_ready.go:81] duration metric: took 14.565864164s waiting for pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace to be "Ready" ...
	E0813 20:50:48.964061  228026 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-ks54v" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-ks54v" not found
	I0813 20:50:48.964068  228026 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.967740  228026 pod_ready.go:92] pod "etcd-embed-certs-20210813204258-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:48.967755  228026 pod_ready.go:81] duration metric: took 3.679569ms waiting for pod "etcd-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.967767  228026 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.971119  228026 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813204258-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:48.971132  228026 pod_ready.go:81] duration metric: took 3.359118ms waiting for pod "kube-apiserver-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.971141  228026 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.974362  228026 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813204258-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:48.974377  228026 pod_ready.go:81] duration metric: took 3.230016ms waiting for pod "kube-controller-manager-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.974385  228026 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dwlks" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.978806  228026 pod_ready.go:92] pod "kube-proxy-dwlks" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:48.978823  228026 pod_ready.go:81] duration metric: took 4.431418ms waiting for pod "kube-proxy-dwlks" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:48.978833  228026 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:49.165009  228026 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813204258-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:50:49.165030  228026 pod_ready.go:81] duration metric: took 186.189666ms waiting for pod "kube-scheduler-embed-certs-20210813204258-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:50:49.165038  228026 pod_ready.go:38] duration metric: took 21.790277929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:50:49.165058  228026 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:50:49.165103  228026 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:50:49.188582  228026 api_server.go:70] duration metric: took 21.991032263s to wait for apiserver process to appear ...
	I0813 20:50:49.188603  228026 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:50:49.188612  228026 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0813 20:50:49.192939  228026 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0813 20:50:49.193740  228026 api_server.go:139] control plane version: v1.21.3
	I0813 20:50:49.193759  228026 api_server.go:129] duration metric: took 5.150351ms to wait for apiserver health ...
	I0813 20:50:49.193768  228026 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:50:49.368994  228026 system_pods.go:59] 9 kube-system pods found
	I0813 20:50:49.369033  228026 system_pods.go:61] "coredns-558bd4d5db-gm5pf" [fc581599-2163-40c7-b1a6-87b204c04c68] Running
	I0813 20:50:49.369041  228026 system_pods.go:61] "etcd-embed-certs-20210813204258-13784" [61d5a519-e0c6-470e-9183-00fd88cf38ae] Running
	I0813 20:50:49.369048  228026 system_pods.go:61] "kindnet-q2qfx" [3bf9d110-5126-486b-bb9e-11a2770c7684] Running
	I0813 20:50:49.369057  228026 system_pods.go:61] "kube-apiserver-embed-certs-20210813204258-13784" [3be5591c-8114-4f0f-97db-cb3ae7110d19] Running
	I0813 20:50:49.369064  228026 system_pods.go:61] "kube-controller-manager-embed-certs-20210813204258-13784" [dcdadc50-e0de-45fc-91b7-cfd79be6a078] Running
	I0813 20:50:49.369081  228026 system_pods.go:61] "kube-proxy-dwlks" [8dcb78b6-d02e-4d30-b222-7956334c1ffa] Running
	I0813 20:50:49.369089  228026 system_pods.go:61] "kube-scheduler-embed-certs-20210813204258-13784" [b64f40d0-f782-4351-8118-314c257f87c4] Running
	I0813 20:50:49.369104  228026 system_pods.go:61] "metrics-server-7c784ccb57-gzvs7" [fe8b167a-7be3-4776-9e7e-9bfa688f2f51] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:50:49.369117  228026 system_pods.go:61] "storage-provisioner" [07a9c238-44bd-4ec8-98dd-685f5680530b] Running
	I0813 20:50:49.369130  228026 system_pods.go:74] duration metric: took 175.355386ms to wait for pod list to return data ...
	I0813 20:50:49.369147  228026 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:50:49.565317  228026 default_sa.go:45] found service account: "default"
	I0813 20:50:49.565339  228026 default_sa.go:55] duration metric: took 196.181175ms for default service account to be created ...
	I0813 20:50:49.565348  228026 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:50:49.770117  228026 system_pods.go:86] 9 kube-system pods found
	I0813 20:50:49.770144  228026 system_pods.go:89] "coredns-558bd4d5db-gm5pf" [fc581599-2163-40c7-b1a6-87b204c04c68] Running
	I0813 20:50:49.770149  228026 system_pods.go:89] "etcd-embed-certs-20210813204258-13784" [61d5a519-e0c6-470e-9183-00fd88cf38ae] Running
	I0813 20:50:49.770153  228026 system_pods.go:89] "kindnet-q2qfx" [3bf9d110-5126-486b-bb9e-11a2770c7684] Running
	I0813 20:50:49.770158  228026 system_pods.go:89] "kube-apiserver-embed-certs-20210813204258-13784" [3be5591c-8114-4f0f-97db-cb3ae7110d19] Running
	I0813 20:50:49.770162  228026 system_pods.go:89] "kube-controller-manager-embed-certs-20210813204258-13784" [dcdadc50-e0de-45fc-91b7-cfd79be6a078] Running
	I0813 20:50:49.770166  228026 system_pods.go:89] "kube-proxy-dwlks" [8dcb78b6-d02e-4d30-b222-7956334c1ffa] Running
	I0813 20:50:49.770170  228026 system_pods.go:89] "kube-scheduler-embed-certs-20210813204258-13784" [b64f40d0-f782-4351-8118-314c257f87c4] Running
	I0813 20:50:49.770177  228026 system_pods.go:89] "metrics-server-7c784ccb57-gzvs7" [fe8b167a-7be3-4776-9e7e-9bfa688f2f51] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:50:49.770181  228026 system_pods.go:89] "storage-provisioner" [07a9c238-44bd-4ec8-98dd-685f5680530b] Running
	I0813 20:50:49.770191  228026 system_pods.go:126] duration metric: took 204.837118ms to wait for k8s-apps to be running ...
	I0813 20:50:49.770203  228026 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:50:49.770248  228026 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:50:49.779800  228026 system_svc.go:56] duration metric: took 9.589628ms WaitForService to wait for kubelet.
	I0813 20:50:49.779821  228026 kubeadm.go:547] duration metric: took 22.582276514s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:50:49.779846  228026 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:50:49.966027  228026 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:50:49.966052  228026 node_conditions.go:123] node cpu capacity is 8
	I0813 20:50:49.966066  228026 node_conditions.go:105] duration metric: took 186.214917ms to run NodePressure ...
	I0813 20:50:49.966076  228026 start.go:231] waiting for startup goroutines ...
	I0813 20:50:50.009523  228026 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:50:50.011851  228026 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813204258-13784" cluster and "default" namespace by default
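	Note: the "minor skew: 1" figure above reflects the Kubernetes version-skew policy, under which kubectl is supported within one minor version of the cluster. A minimal bash sketch of the same comparison (illustrative only, not minikube's actual check; assumes `kubectl version --short` is available, as in kubectl 1.20):
	
	  # Compare the kubectl client minor version against the cluster's.
	  client=$(kubectl version --short | awk '/Client Version/ {print $3}')   # e.g. v1.20.5
	  server=$(kubectl version --short | awk '/Server Version/ {print $3}')   # e.g. v1.21.3
	  skew=$(( $(echo "$server" | cut -d. -f2) - $(echo "$client" | cut -d. -f2) ))
	  skew=${skew#-}                                 # absolute value of the minor delta
	  echo "kubectl: ${client#v}, cluster: ${server#v} (minor skew: $skew)"
	  if [ "$skew" -gt 1 ]; then
	    echo "warning: kubectl is more than one minor version from the cluster" >&2
	  fi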
	I0813 20:50:46.846685  264876 out.go:177] * Restarting existing docker container for "newest-cni-20210813204926-13784" ...
	I0813 20:50:46.846761  264876 cli_runner.go:115] Run: docker start newest-cni-20210813204926-13784
	I0813 20:50:48.180153  264876 cli_runner.go:168] Completed: docker start newest-cni-20210813204926-13784: (1.333363318s)
	I0813 20:50:48.180235  264876 cli_runner.go:115] Run: docker container inspect newest-cni-20210813204926-13784 --format={{.State.Status}}
	I0813 20:50:48.223479  264876 kic.go:420] container "newest-cni-20210813204926-13784" state is running.
	I0813 20:50:48.223889  264876 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813204926-13784
	I0813 20:50:48.265787  264876 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/config.json ...
	I0813 20:50:48.265983  264876 machine.go:88] provisioning docker machine ...
	I0813 20:50:48.266011  264876 ubuntu.go:169] provisioning hostname "newest-cni-20210813204926-13784"
	I0813 20:50:48.266061  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:48.310967  264876 main.go:130] libmachine: Using SSH client type: native
	I0813 20:50:48.311181  264876 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0813 20:50:48.311214  264876 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813204926-13784 && echo "newest-cni-20210813204926-13784" | sudo tee /etc/hostname
	I0813 20:50:48.311756  264876 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58560->127.0.0.1:32970: read: connection reset by peer
	I0813 20:50:51.449306  264876 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813204926-13784
	
	I0813 20:50:51.449379  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:51.489414  264876 main.go:130] libmachine: Using SSH client type: native
	I0813 20:50:51.489609  264876 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0813 20:50:51.489631  264876 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813204926-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813204926-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813204926-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:50:51.613227  264876 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:50:51.613256  264876 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:50:51.613297  264876 ubuntu.go:177] setting up certificates
	I0813 20:50:51.613308  264876 provision.go:83] configureAuth start
	I0813 20:50:51.613358  264876 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813204926-13784
	I0813 20:50:51.654619  264876 provision.go:138] copyHostCerts
	I0813 20:50:51.654685  264876 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:50:51.654697  264876 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:50:51.654757  264876 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:50:51.654839  264876 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:50:51.654849  264876 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:50:51.654870  264876 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:50:51.654925  264876 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:50:51.654932  264876 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:50:51.654951  264876 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:50:51.655069  264876 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813204926-13784 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210813204926-13784]
	I0813 20:50:51.807477  264876 provision.go:172] copyRemoteCerts
	I0813 20:50:51.807532  264876 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:50:51.807567  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:51.849378  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:51.936194  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:50:51.953431  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 20:50:51.970331  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:50:51.985704  264876 provision.go:86] duration metric: configureAuth took 372.382117ms
	I0813 20:50:51.985735  264876 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:50:51.985896  264876 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:50:51.986003  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.028196  264876 main.go:130] libmachine: Using SSH client type: native
	I0813 20:50:52.028364  264876 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0813 20:50:52.028383  264876 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:50:52.481983  264876 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:50:52.482016  264876 machine.go:91] provisioned docker machine in 4.216018042s
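	Note: the CRIO_MINIKUBE_OPTIONS drop-in written above only has an effect if crio.service loads it. A hypothetical sketch of the systemd wiring this relies on (the kicbase image ships its own unit; the drop-in path and ExecStart line below are assumptions for illustration):
	
	  # Make crio.service read /etc/sysconfig/crio.minikube and pass the
	  # options (here: --insecure-registry 10.96.0.0/12) on its command line.
	  sudo mkdir -p /etc/systemd/system/crio.service.d
	  sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf >/dev/null <<-'EOF'
	  [Service]
	  EnvironmentFile=-/etc/sysconfig/crio.minikube
	  ExecStart=
	  ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
	  EOF
	  sudo systemctl daemon-reload && sudo systemctl restart crio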
	I0813 20:50:52.482031  264876 start.go:267] post-start starting for "newest-cni-20210813204926-13784" (driver="docker")
	I0813 20:50:52.482045  264876 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:50:52.482112  264876 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:50:52.482156  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.526617  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:52.620328  264876 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:50:52.622900  264876 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:50:52.622920  264876 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:50:52.622930  264876 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:50:52.622938  264876 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:50:52.622953  264876 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:50:52.623008  264876 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:50:52.623113  264876 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:50:52.623231  264876 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:50:52.629735  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:50:52.645030  264876 start.go:270] post-start completed in 162.979209ms
	I0813 20:50:52.645097  264876 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:50:52.645138  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.692092  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:52.777613  264876 fix.go:57] fixHost completed within 5.97303099s
	I0813 20:50:52.777641  264876 start.go:80] releasing machines lock for "newest-cni-20210813204926-13784", held for 5.973075432s
	I0813 20:50:52.777728  264876 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210813204926-13784
	I0813 20:50:52.819833  264876 ssh_runner.go:149] Run: systemctl --version
	I0813 20:50:52.819878  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.819892  264876 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:50:52.819956  264876 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:50:52.874165  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:52.879736  264876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:50:52.966346  264876 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:50:53.124960  264876 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:50:53.133990  264876 docker.go:153] disabling docker service ...
	I0813 20:50:53.134037  264876 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:50:53.142305  264876 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:50:53.150817  264876 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:50:53.228419  264876 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:50:53.302804  264876 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:50:53.312497  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:50:53.324473  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:50:53.331802  264876 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:50:53.331831  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
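	Note: the two sed edits above pin the pause image and select the "kindnet" CNI network directly in CRI-O's TOML config; CRI-O only picks the values up on restart (the daemon-reload/start a few lines below). The same edits as a standalone sketch:
	
	  # Rewrite the two keys in /etc/crio/crio.conf in place.
	  sudo sed -i 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' /etc/crio/crio.conf
	  sudo sed -i 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' /etc/crio/crio.conf
	  # Resulting lines:
	  #   pause_image = "k8s.gcr.io/pause:3.4.1"
	  #   cni_default_network = "kindnet"
	  sudo systemctl restart crio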
	I0813 20:50:53.339358  264876 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:50:53.345260  264876 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:50:53.345303  264876 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:50:53.351997  264876 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
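	Note: the three steps above are a probe-then-enable fallback: reading the sysctl fails with status 255 because the br_netfilter module is not loaded (so /proc/sys/net/bridge/ does not exist), loading the module creates it (bridge-nf-call-iptables defaults to 1), and IPv4 forwarding is switched on for pod routing. As a standalone sketch:
	
	  # Ensure bridged pod traffic is visible to iptables and can be routed.
	  if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter        # creates /proc/sys/net/bridge/* (default: 1)
	  fi
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"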
	I0813 20:50:53.358759  264876 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:50:53.427077  264876 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:50:53.435916  264876 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:50:53.435971  264876 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:50:53.438776  264876 start.go:413] Will wait 60s for crictl version
	I0813 20:50:53.438820  264876 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:50:53.467202  264876 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:50:53.467283  264876 ssh_runner.go:149] Run: crio --version
	I0813 20:50:53.532867  264876 ssh_runner.go:149] Run: crio --version
	I0813 20:50:53.609321  264876 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on CRI-O 1.20.3 ...
	I0813 20:50:53.609391  264876 cli_runner.go:115] Run: docker network inspect newest-cni-20210813204926-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:50:53.649220  264876 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0813 20:50:53.652532  264876 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
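	Note: the one-liner above is an idempotent hosts-file update: drop any existing host.minikube.internal entry, append the current mapping, and copy the temp file back under sudo (a plain `>` redirect would be opened by the unprivileged shell and fail). Expanded for readability:
	
	  {
	    grep -v $'\thost.minikube.internal$' /etc/hosts    # keep every other entry
	    printf '192.168.76.1\thost.minikube.internal\n'    # append the fresh mapping
	  } > /tmp/h.$$                                        # $$ gives a cheap unique temp name
	  sudo cp /tmp/h.$$ /etc/hosts                         # privileged write via cp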
	I0813 20:50:53.664141  264876 out.go:177]   - kubelet.network-plugin=cni
	I0813 20:50:49.881092  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:51.881399  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:53.881703  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:53.665588  264876 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 20:50:53.665681  264876 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:50:53.665756  264876 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:50:53.699602  264876 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:50:53.699626  264876 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:50:53.699675  264876 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:50:53.724335  264876 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:50:53.724359  264876 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:50:53.724444  264876 ssh_runner.go:149] Run: crio config
	I0813 20:50:53.811924  264876 cni.go:93] Creating CNI manager for ""
	I0813 20:50:53.811946  264876 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:50:53.811961  264876 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 20:50:53.811980  264876 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813204926-13784 NodeName:newest-cni-20210813204926-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:50:53.812134  264876 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "newest-cni-20210813204926-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:50:53.812240  264876 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813204926-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813204926-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:50:53.812300  264876 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 20:50:53.819186  264876 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:50:53.819240  264876 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:50:53.825554  264876 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (604 bytes)
	I0813 20:50:53.836945  264876 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 20:50:53.848402  264876 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
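	Note: the kubeadm config is deliberately shipped as kubeadm.yaml.new; further down, `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` decides whether the cluster must be reconfigured, and only then is the file promoted with `sudo cp`. A sketch of that compare-then-swap:
	
	  CUR=/var/tmp/minikube/kubeadm.yaml
	  NEW=/var/tmp/minikube/kubeadm.yaml.new
	  # diff exits 0 when identical, 1 when different, >1 on error (e.g. CUR missing).
	  if sudo diff -u "$CUR" "$NEW"; then
	    echo "kubeadm config unchanged"
	  else
	    sudo cp "$NEW" "$CUR"              # promote the new config
	    echo "kubeadm config changed; reconfigure required"
	  fi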
	I0813 20:50:53.860427  264876 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:50:53.865189  264876 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:50:53.879272  264876 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784 for IP: 192.168.76.2
	I0813 20:50:53.879327  264876 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:50:53.879348  264876 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:50:53.879409  264876 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/client.key
	I0813 20:50:53.879433  264876 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/apiserver.key.31bdca25
	I0813 20:50:53.879453  264876 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/proxy-client.key
	I0813 20:50:53.879580  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:50:53.879650  264876 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:50:53.879665  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:50:53.879707  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:50:53.879741  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:50:53.879773  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:50:53.879834  264876 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:50:53.881218  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:50:53.899521  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:50:53.915894  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:50:53.931633  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813204926-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:50:53.947169  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:50:53.963670  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:50:53.981391  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:50:53.997988  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:50:54.014085  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:50:54.030641  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:50:54.046447  264876 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:50:54.062151  264876 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:50:54.075474  264876 ssh_runner.go:149] Run: openssl version
	I0813 20:50:54.080357  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:50:54.087793  264876 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:50:54.090703  264876 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:50:54.090747  264876 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:50:54.095830  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:50:54.102955  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:50:54.110898  264876 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:50:54.113881  264876 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:50:54.113927  264876 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:50:54.118800  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:50:54.124987  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:50:54.131683  264876 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:50:54.134491  264876 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:50:54.134587  264876 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:50:54.138936  264876 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
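	Note: the ls/openssl/ln sequences above install each CA using OpenSSL's hashed-symlink convention: certificates in /etc/ssl/certs are looked up via <subject-hash>.0 links (b5213941.0 for minikubeCA.pem here). The general pattern as a sketch:
	
	  # Link a CA certificate into the OpenSSL trust dir under its subject hash.
	  link_ca() {
	    local pem=$1                                   # e.g. /usr/share/ca-certificates/minikubeCA.pem
	    local hash
	    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
	    sudo ln -fs "$pem" "/etc/ssl/certs/$(basename "$pem")"
	    sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"    # what OpenSSL actually resolves
	  }
	  link_ca /usr/share/ca-certificates/minikubeCA.pem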
	I0813 20:50:54.145022  264876 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813204926-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813204926-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:50:54.145156  264876 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:50:54.145195  264876 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:50:54.170844  264876 cri.go:76] found id: ""
	I0813 20:50:54.170916  264876 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:50:54.178980  264876 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:50:54.179013  264876 kubeadm.go:600] restartCluster start
	I0813 20:50:54.179057  264876 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:50:54.185858  264876 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:54.187404  264876 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210813204926-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:50:54.188271  264876 kubeconfig.go:128] "newest-cni-20210813204926-13784" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:50:54.189791  264876 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:50:54.193200  264876 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:50:54.199851  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:54.199904  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:54.212679  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:54.413069  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:54.413144  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:54.427381  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:54.613550  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:54.613635  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:54.626416  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:54.813646  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:54.813728  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:54.826912  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.013120  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.013184  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.026175  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.213415  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.213538  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.226473  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.413724  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.413815  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.427168  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.613554  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.613640  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.626804  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:55.813048  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:55.813125  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:55.826688  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.012836  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.012934  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.026593  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.213796  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.213876  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.226975  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.381388  240241 pod_ready.go:102] pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace has status "Ready":"False"
	I0813 20:50:57.375979  240241 pod_ready.go:81] duration metric: took 4m0.40024312s waiting for pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace to be "Ready" ...
	E0813 20:50:57.376007  240241 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-djn9g" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 20:50:57.376033  240241 pod_ready.go:38] duration metric: took 4m41.352808191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:50:57.376072  240241 kubeadm.go:604] restartCluster took 4m59.978005133s
	W0813 20:50:57.376213  240241 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 20:50:57.376247  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0813 20:50:59.796806  233224 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:00.211538  233224 cni.go:93] Creating CNI manager for ""
	I0813 20:51:00.211560  233224 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:50:56.413440  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.413541  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.426726  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.612916  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.613005  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.626419  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:56.813721  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:56.813805  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:56.827157  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.013383  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:57.013458  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:57.026571  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.213811  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:57.213884  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:57.229656  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.229678  264876 api_server.go:164] Checking apiserver status ...
	I0813 20:50:57.229721  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:50:57.257903  264876 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.257931  264876 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
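	Note: the long run of "Checking apiserver status" entries above is a single poll loop: roughly every 200ms minikube pgreps for a kube-apiserver process and, once the deadline passes with no hit, concludes the cluster needs reconfiguring. The same poll as a bash sketch (the 60s budget is illustrative; minikube's internal timeout differs):
	
	  deadline=$((SECONDS + 60))
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    if (( SECONDS >= deadline )); then
	      echo "timed out waiting for kube-apiserver" >&2
	      exit 1
	    fi
	    sleep 0.2    # matches the ~200ms cadence in the timestamps
	  done
	  echo "kube-apiserver is up"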
	I0813 20:50:57.257940  264876 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:50:57.257953  264876 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:50:57.258022  264876 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:50:57.284765  264876 cri.go:76] found id: ""
	I0813 20:50:57.284833  264876 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:50:57.293907  264876 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:50:57.300372  264876 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 13 20:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 13 20:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug 13 20:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 20:49 /etc/kubernetes/scheduler.conf
	
	I0813 20:50:57.300428  264876 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 20:50:57.307066  264876 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 20:50:57.313234  264876 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:50:57.319470  264876 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.319524  264876 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:50:57.326182  264876 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:50:57.332414  264876 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:50:57.332457  264876 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:50:57.338423  264876 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:50:57.344511  264876 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:50:57.344529  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:57.393025  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:58.119506  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:58.280484  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:58.360789  264876 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:50:58.415944  264876 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:50:58.416007  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:50:58.929847  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:50:59.429510  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:50:59.930153  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:00.429376  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:00.930093  264876 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:00.213574  233224 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:00.213642  233224 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:00.217367  233224 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0813 20:51:00.217392  233224 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:00.229623  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:51:00.384709  233224 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:00.384807  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813204216-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_00_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:00.384827  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:00.489271  233224 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:00.489261  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:01.046490  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:01.546418  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:02.045997  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:02.546602  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:03.046794  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:44:49 UTC, end at Fri 2021-08-13 20:51:05 UTC. --
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.002869712Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=26c3122c-22d0-4228-a462-e9e1f68dff97 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.004644151Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=26c3122c-22d0-4228-a462-e9e1f68dff97 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.006244825Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=1b1c5d3f-505c-4d3d-abbb-0b0ce2f7fe61 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.024436732Z" level=info msg="Removed container fe2b5585c3fa39e26e9d5cf44168846f79530b1c249700777cad8c341b644614: kube-system/coredns-558bd4d5db-ks54v/coredns" id=ad91bb3a-b5d9-4810-bc2d-d35ef7736f59 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.181017020Z" level=info msg="Created container 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=1b1c5d3f-505c-4d3d-abbb-0b0ce2f7fe61 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.181545418Z" level=info msg="Starting container: 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8" id=8d5b0b58-57b4-4190-a83d-9632e922b214 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:50:39 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:39.206319649Z" level=info msg="Started container 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=8d5b0b58-57b4-4190-a83d-9632e922b214 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:50:40 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:40.004074795Z" level=info msg="Removing container: 1417600f5b638dfa404383feb9553768fa1c9bfda0edf2630436d174c4f61279" id=a346e863-8fb7-4a4b-86c1-079aa3662188 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:50:40 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:40.042096148Z" level=info msg="Removed container 1417600f5b638dfa404383feb9553768fa1c9bfda0edf2630436d174c4f61279: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=a346e863-8fb7-4a4b-86c1-079aa3662188 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:50:41 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:41.846364389Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=555a787d-f484-429c-baa3-fafb1a4556e5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:41 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:41.846604957Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=555a787d-f484-429c-baa3-fafb1a4556e5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:41 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:41.847070834Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=f5013cb4-dbb3-4cee-97a4-931fea72a558 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:50:41 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:41.858943915Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:50:55 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:55.846563931Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=06054aa7-d025-4df3-9290-bbec8d5fee4d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:55 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:55.846837883Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=06054aa7-d025-4df3-9290-bbec8d5fee4d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.845821279Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=7d860a69-c574-4e09-829f-8f70dd0254b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.847766194Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7d860a69-c574-4e09-829f-8f70dd0254b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.848502381Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=2281b9c5-1eb1-459f-bee5-9cf22933b485 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.849998222Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2281b9c5-1eb1-459f-bee5-9cf22933b485 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:50:57 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:57.850793815Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=bbb4207b-c353-4446-a216-1df009f0c604 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:50:58 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:58.041954692Z" level=info msg="Created container 0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=bbb4207b-c353-4446-a216-1df009f0c604 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:50:58 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:58.042363057Z" level=info msg="Starting container: 0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4" id=45cf925e-2c1b-41f6-a1ea-e070313a9c31 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:50:58 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:58.070046770Z" level=info msg="Started container 0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=45cf925e-2c1b-41f6-a1ea-e070313a9c31 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:50:59 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:59.044868996Z" level=info msg="Removing container: 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8" id=21a8d7c4-e590-4075-92c6-b0be58c80c75 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:50:59 embed-certs-20210813204258-13784 crio[243]: time="2021-08-13 20:50:59.082650220Z" level=info msg="Removed container 6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr/dashboard-metrics-scraper" id=21a8d7c4-e590-4075-92c6-b0be58c80c75 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	0dc20b85f9c80       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   7 seconds ago       Exited              dashboard-metrics-scraper   2                   1f46ccae6d64a
	3bc09e533cbb6       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   33 seconds ago      Running             kubernetes-dashboard        0                   a298ce79dba9b
	8bf1e8aedcbf3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   35 seconds ago      Running             storage-provisioner         0                   8a464f7e39ae5
	99a064dab5bd0       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   35 seconds ago      Running             coredns                     0                   6c11ed8d86b44
	49225baf7b8d7       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   37 seconds ago      Running             kube-proxy                  0                   343023a5dac1a
	7f45a57dc9d5c       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   37 seconds ago      Running             kindnet-cni                 0                   a36a19fd30682
	d746a120d4eae       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   59 seconds ago      Running             kube-apiserver              0                   9078fe56df8e2
	fec203c8ef0a0       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   59 seconds ago      Running             kube-controller-manager     0                   fc33ea841e781
	c3a394b8c1f40       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   59 seconds ago      Running             kube-scheduler              0                   0c61a7c654523
	e50f48444db7d       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   59 seconds ago      Running             etcd                        0                   89c09c07c7db9
	
	* 
	* ==> coredns [99a064dab5bd08de84442cabfd73b4db0b7b9b99488cc7cac221ce0f99a85408] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210813204258-13784
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20210813204258-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=embed-certs-20210813204258-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_50_13_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:50:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210813204258-13784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:50:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:50:49 +0000   Fri, 13 Aug 2021 20:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:50:49 +0000   Fri, 13 Aug 2021 20:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:50:49 +0000   Fri, 13 Aug 2021 20:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:50:49 +0000   Fri, 13 Aug 2021 20:50:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20210813204258-13784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                35ecb4c2-d272-49c5-8ada-a920e3507cbd
	  Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-gm5pf                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     38s
	  kube-system                 etcd-embed-certs-20210813204258-13784                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         46s
	  kube-system                 kindnet-q2qfx                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      38s
	  kube-system                 kube-apiserver-embed-certs-20210813204258-13784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-embed-certs-20210813204258-13784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-dwlks                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-embed-certs-20210813204258-13784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 metrics-server-7c784ccb57-gzvs7                             100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         36s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-cdvnr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-hrz2v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  63s (x4 over 64s)  kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x4 over 64s)  kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x4 over 64s)  kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 47s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             47s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeNotReady
	  Normal  NodeReady                39s                kubelet     Node embed-certs-20210813204258-13784 status is now: NodeReady
	  Normal  Starting                 36s                kube-proxy  Starting kube-proxy.
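For reference, the Conditions table above can be read programmatically as well as via "kubectl describe node". A minimal client-go sketch; the kubeconfig path is an assumption, not something taken from this job:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		// Assumed kubeconfig location; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-20210813204258-13784", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Prints the same Type/Status/Reason triples as the Conditions table.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}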
	
	* 
	* ==> dmesg <==
	* [  +0.863954] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000003] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +1.695863] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000003] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +1.392691] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth825c196e
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b6 08 cd a5 f5 a7 08 06        ..............
	[  +0.344665] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth546386a2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 3a 56 8b 4b bf 71 08 06        ......:V.K.q..
	[  +1.786377] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000002] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +0.213329] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethef57fd78
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff b6 5e bf c9 d0 3e 08 06        .......^...>..
	[  +0.748014] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth1f03081f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 a4 c4 4d 81 45 08 06        .........M.E..
	[  +0.031657] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev vethc9ff12d8
	[  +0.000086] ll header: 00000000: ff ff ff ff ff ff 46 64 65 38 37 ec 08 06        ......Fde87...
	[  +2.087016] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000002] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +3.827412] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000003] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +3.071814] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-184061f6a312
	[  +0.000002] ll header: 00000000: 02 42 3e 2f 9b 3d 02 42 c0 a8 3a 02 08 00        .B>/.=.B..:...
	[  +8.102656] cgroup: cgroup2: unknown option "nsdelegate"
	[  +2.392585] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000002] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	
	* 
	* ==> etcd [e50f48444db7d426c632bb681fc42b2cdeb9d5b5758f99b38f3d97785f7a98fb] <==
	* 2021-08-13 20:50:05.858546 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:50:05.858655 I | embed: listening for peers on 192.168.58.2:2380
	2021-08-13 20:50:05.858741 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 20:50:06 INFO: b2c6679ac05f2cf1 is starting a new election at term 1
	raft2021/08/13 20:50:06 INFO: b2c6679ac05f2cf1 became candidate at term 2
	raft2021/08/13 20:50:06 INFO: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	raft2021/08/13 20:50:06 INFO: b2c6679ac05f2cf1 became leader at term 2
	raft2021/08/13 20:50:06 INFO: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-13 20:50:06.558059 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:50:06.558871 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:50:06.558931 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:50:06.558966 I | embed: ready to serve client requests
	2021-08-13 20:50:06.559067 I | etcdserver: published {Name:embed-certs-20210813204258-13784 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-13 20:50:06.559112 I | embed: ready to serve client requests
	2021-08-13 20:50:06.562031 I | embed: serving client requests on 192.168.58.2:2379
	2021-08-13 20:50:06.562379 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:50:15.300959 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (122.086046ms) to execute
	2021-08-13 20:50:15.301008 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (464.775602ms) to execute
	2021-08-13 20:50:15.301111 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (512.804506ms) to execute
	2021-08-13 20:50:20.278751 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (101.941096ms) to execute
	2021-08-13 20:50:20.278832 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-20210813204258-13784\" " with result "range_response_count:1 size:5726" took too long (168.420509ms) to execute
	2021-08-13 20:50:25.266893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:50:32.818634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:50:42.818911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:50:52.818357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
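The "took too long ... to execute" warnings above are etcd's slow-request tracing: each read-only range request is timed, and any that cross a threshold get logged. All of the flagged requests here exceed 100ms, which is consistent with the usual default, though that number is an inference rather than a setting read from this cluster. A stdlib sketch of the pattern:

	package main

	import (
		"log"
		"time"
	)

	// timed runs op and logs a warning when it exceeds threshold, in the
	// spirit of the slow read-only range request warnings above.
	func timed(name string, threshold time.Duration, op func() error) error {
		start := time.Now()
		err := op()
		if d := time.Since(start); d > threshold {
			log.Printf("W | %s took too long (%v) to execute", name, d)
		}
		return err
	}

	func main() {
		_ = timed("read-only range request", 100*time.Millisecond, func() error {
			time.Sleep(150 * time.Millisecond) // simulated slow read
			return nil
		})
	}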
	
	* 
	* ==> kernel <==
	*  20:51:05 up  1:33,  0 users,  load average: 1.75, 2.18, 2.08
	Linux embed-certs-20210813204258-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [d746a120d4eae768e04d4b51465b26b36e4c86d2b8ef44f610ca5355d595a2b4] <==
	* I0813 20:50:11.412451       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 20:50:11.415707       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 20:50:11.415727       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:50:11.826578       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:50:11.872745       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 20:50:11.998250       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0813 20:50:11.998998       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:50:12.002669       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 20:50:12.989623       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:50:13.395310       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:50:13.471881       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 20:50:15.302757       1 trace.go:205] Trace[1824767926]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/tokens-controller,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:50:14.787) (total time: 514ms):
	Trace[1824767926]: ---"About to write a response" 514ms (20:50:00.302)
	Trace[1824767926]: [514.964005ms] [514.964005ms] END
	I0813 20:50:18.816282       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:50:27.061289       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:50:27.211943       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 20:50:32.103106       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 20:50:32.103172       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:50:32.103181       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 20:50:44.885458       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:50:44.885540       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:50:44.885551       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [fec203c8ef0a0fa2c640aead10bd6dfb3c5b18ba85d4372df5349b59b289a3ba] <==
	* I0813 20:50:29.177404       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 20:50:29.362275       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:50:29.572699       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-gzvs7"
	I0813 20:50:29.974064       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 20:50:29.989448       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.059554       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.063382       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 20:50:30.067310       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.068048       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:50:30.072394       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.076831       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:50:30.077274       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.077329       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.080035       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.080082       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.082232       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.082526       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.159452       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.159493       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:50:30.167677       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:50:30.167751       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:50:30.184177       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-hrz2v"
	I0813 20:50:30.262431       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-cdvnr"
	E0813 20:50:56.529145       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:50:57.080744       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [49225baf7b8d7b1d6a19de833aedc8509f3345bd949135a8f7889e8bbd86ae89] <==
	* I0813 20:50:29.159757       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0813 20:50:29.159825       1 server_others.go:140] Detected node IP 192.168.58.2
	W0813 20:50:29.159882       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:50:29.486534       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:50:29.486563       1 server_others.go:212] Using iptables Proxier.
	I0813 20:50:29.486573       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:50:29.486583       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:50:29.486851       1 server.go:643] Version: v1.21.3
	I0813 20:50:29.558797       1 config.go:315] Starting service config controller
	I0813 20:50:29.558833       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:50:29.558859       1 config.go:224] Starting endpoint slice config controller
	I0813 20:50:29.558864       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:50:29.565263       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:50:29.568710       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:50:29.659541       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:50:29.661814       1 shared_informer.go:247] Caches are synced for endpoint slice config 
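The "Waiting for caches to sync" / "Caches are synced" pair above is client-go's standard shared-informer startup handshake: kube-proxy holds off processing service and endpoint-slice events until its informer caches finish an initial list. A minimal sketch of that handshake; the kubeconfig path is an assumption:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		stop := make(chan struct{})
		defer close(stop)

		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svc := factory.Core().V1().Services().Informer()
		factory.Start(stop) // analogous to "Starting service config controller"
		// Analogous to "Waiting for caches to sync for service config".
		if !cache.WaitForCacheSync(stop, svc.HasSynced) {
			panic("caches did not sync")
		}
		fmt.Println("caches are synced for service config")
	}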
	
	* 
	* ==> kube-scheduler [c3a394b8c1f401d1467ee22fffd5f729b8b442b8afffcff13e2e2fb2dcff22fc] <==
	* I0813 20:50:10.479958       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:50:10.484973       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:50:10.485077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:50:10.485157       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:50:10.485359       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:50:10.487566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:10.488327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:10.488566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:50:10.488585       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:50:10.488674       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:50:10.488736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:10.488811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:50:10.488828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:50:10.488854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:10.488860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:50:11.380018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:50:11.487053       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:50:11.526252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:50:11.558724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:11.560630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:50:11.595983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:50:11.598828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:50:11.674285       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:11.690651       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0813 20:50:13.581042       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:44:49 UTC, end at Fri 2021-08-13 20:51:06 UTC. --
	Aug 13 20:50:40 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:40.003295    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:40 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:40.003667    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:41.006177    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.006443    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.863462    5530 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.863507    5530 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.863652    5530 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7bgfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-gzvs7_kube-system(fe8b167a-7be3-4776-9e7e-9bfa688f2f51): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 13 20:50:41 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:41.863707    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-gzvs7" podUID=fe8b167a-7be3-4776-9e7e-9bfa688f2f51
	Aug 13 20:50:42 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:42.007793    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:42 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:42.008146    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:50:49 embed-certs-20210813204258-13784 kubelet[5530]: W0813 20:50:49.372820    5530 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:50:49 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:49.379014    5530 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/docker/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:50:55 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:55.847758    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-gzvs7" podUID=fe8b167a-7be3-4776-9e7e-9bfa688f2f51
	Aug 13 20:50:57 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:57.845281    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:59.041065    5530 scope.go:111] "RemoveContainer" containerID="6d6b6bc49627b8ffc5e93478786b6910f41b033e2ea56d57c521ec1c19e5f3e8"
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:50:59.041378    5530 scope.go:111] "RemoveContainer" containerID="0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4"
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:59.041779    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: W0813 20:50:59.463752    5530 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:50:59 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:50:59.472841    5530 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd/docker/d64d0cfaddd9382c6384bc95b5bac51f808e65e3684b4a3e4d4d9d9f3ab605cd\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:51:00 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:51:00.272447    5530 scope.go:111] "RemoveContainer" containerID="0dc20b85f9c8033ce6da08d721053a94bb4fda8ee0cd14ac83e0aa399535aec4"
	Aug 13 20:51:00 embed-certs-20210813204258-13784 kubelet[5530]: E0813 20:51:00.272822    5530 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-cdvnr_kubernetes-dashboard(4d54eecf-f562-4271-9e2a-c01c594b974e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-cdvnr" podUID=4d54eecf-f562-4271-9e2a-c01c594b974e
	Aug 13 20:51:01 embed-certs-20210813204258-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:51:01 embed-certs-20210813204258-13784 kubelet[5530]: I0813 20:51:01.148026    5530 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:51:01 embed-certs-20210813204258-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:51:01 embed-certs-20210813204258-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
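Two details in the kubelet log above are worth calling out: the metrics-server pull failures are expected, because the test deliberately points that image at fake.domain, and the dashboard-metrics-scraper restarts show CrashLoopBackOff's doubling delay (back-off 10s, then back-off 20s). A stdlib sketch of that schedule; the 5-minute cap is the commonly documented kubelet default and is stated here as an assumption rather than read from this cluster:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay := 10 * time.Second        // first CrashLoopBackOff delay seen above
		const maxDelay = 5 * time.Minute // assumed cap
		for i := 1; i <= 7; i++ {
			fmt.Printf("restart %d: back-off %v restarting failed container\n", i, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}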
	
	* 
	* ==> kubernetes-dashboard [3bc09e533cbb6b30a70423557aaec8245cfd842ff0591228b65d32ff5503de07] <==
	* 2021/08/13 20:50:31 Using namespace: kubernetes-dashboard
	2021/08/13 20:50:31 Using in-cluster config to connect to apiserver
	2021/08/13 20:50:31 Using secret token for csrf signing
	2021/08/13 20:50:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:50:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:50:31 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 20:50:31 Generating JWE encryption key
	2021/08/13 20:50:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:50:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:50:32 Initializing JWE encryption key from synchronized object
	2021/08/13 20:50:32 Creating in-cluster Sidecar client
	2021/08/13 20:50:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:50:32 Serving insecurely on HTTP port: 9090
	2021/08/13 20:50:31 Starting overwatch
	
	* 
	* ==> storage-provisioner [8bf1e8aedcbf30a5946a03d983f8d7326ecc43ed49afcf622dda7dde80ce4e32] <==
	* I0813 20:50:30.763415       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:50:30.771188       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:50:30.771233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:50:30.777421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:50:30.777569       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e6ab65a-9af6-4aa9-9811-02273863e98b", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210813204258-13784_0634d70b-e4c9-4e74-b403-51f35f3c09c2 became leader
	I0813 20:50:30.777666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813204258-13784_0634d70b-e4c9-4e74-b403-51f35f3c09c2!
	I0813 20:50:30.878466       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813204258-13784_0634d70b-e4c9-4e74-b403-51f35f3c09c2!
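The storage-provisioner startup above is client-go leader election: the process acquires the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-based lock, per the event) before starting its controller. A minimal sketch with the same library; it swaps in the newer Leases lock type, and the kubeconfig path and identity are assumptions:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		id, _ := os.Hostname() // assumed identity
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			cs.CoreV1(), cs.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			log.Fatal(err)
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("successfully acquired lease") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}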
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784: exit status 2 (361.650999ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210813204258-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-gzvs7
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210813204258-13784 describe pod metrics-server-7c784ccb57-gzvs7
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210813204258-13784 describe pod metrics-server-7c784ccb57-gzvs7: exit status 1 (75.124754ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-gzvs7" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210813204258-13784 describe pod metrics-server-7c784ccb57-gzvs7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.96s)
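The post-mortem above enumerates non-running pods with kubectl's --field-selector=status.phase!=Running. The equivalent list call through client-go, with the kubeconfig path again an assumption:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running", // same selector as the kubectl call above
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace + "/" + p.Name)
		}
	}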

TestStartStop/group/newest-cni/serial/Pause (116.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210813204926-13784 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20210813204926-13784 --alsologtostderr -v=1: exit status 80 (2.420698425s)

-- stdout --
	* Pausing node newest-cni-20210813204926-13784 ... 
	
	

-- /stdout --
** stderr ** 
	I0813 20:51:13.457827  272325 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:51:13.458070  272325 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:13.458083  272325 out.go:311] Setting ErrFile to fd 2...
	I0813 20:51:13.458089  272325 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:13.458245  272325 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:51:13.458491  272325 out.go:305] Setting JSON to false
	I0813 20:51:13.458518  272325 mustload.go:65] Loading cluster: newest-cni-20210813204926-13784
	I0813 20:51:13.458990  272325 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:13.459479  272325 cli_runner.go:115] Run: docker container inspect newest-cni-20210813204926-13784 --format={{.State.Status}}
	I0813 20:51:13.529467  272325 host.go:66] Checking if "newest-cni-20210813204926-13784" exists ...
	I0813 20:51:13.530462  272325 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210813204926-13784 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:51:13.533755  272325 out.go:177] * Pausing node newest-cni-20210813204926-13784 ... 
	I0813 20:51:13.533800  272325 host.go:66] Checking if "newest-cni-20210813204926-13784" exists ...
	I0813 20:51:13.534156  272325 ssh_runner.go:149] Run: systemctl --version
	I0813 20:51:13.534212  272325 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210813204926-13784
	I0813 20:51:13.598477  272325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813204926-13784/id_rsa Username:docker}
	I0813 20:51:13.713602  272325 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:13.736654  272325 pause.go:50] kubelet running: true
	I0813 20:51:13.736712  272325 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:51:13.944279  272325 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:51:13.944377  272325 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:51:14.047550  272325 cri.go:76] found id: "f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6"
	I0813 20:51:14.047580  272325 cri.go:76] found id: "242db25cc9ec38e4dfbb4bc1b78267e847c6525b3d52669618f415e5a43d959d"
	I0813 20:51:14.047588  272325 cri.go:76] found id: "d62beeb118a98c801163373b044340940086f4ec752c59bd5004b24226ad7e4f"
	I0813 20:51:14.047594  272325 cri.go:76] found id: "5cf3304e920ea2ce118869d3ede6bac32c48015da75c58f1888d62ec7c002759"
	I0813 20:51:14.047600  272325 cri.go:76] found id: "6d9e66e7c07b8532bae3df4efaa7ce0d7585e57894154a5bc5ad692e5c2b97e4"
	I0813 20:51:14.047607  272325 cri.go:76] found id: "3b151ac588863ccbbe66e0bd5c7ec4eb98e7de2368b68a042cec308fb9fcad5c"
	I0813 20:51:14.047612  272325 cri.go:76] found id: "f73f74e7523c018e7399db61b0c77d3dbce4b56844ab358ee6d86318a7f3adb6"
	I0813 20:51:14.047618  272325 cri.go:76] found id: "6254822604de291d71c4f3126bc028a40c85d5466257614648d572ffe55992aa"
	I0813 20:51:14.047624  272325 cri.go:76] found id: ""
	I0813 20:51:14.047669  272325 ssh_runner.go:149] Run: sudo runc list -f json

** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p newest-cni-20210813204926-13784 --alsologtostderr -v=1 failed: exit status 80
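The trace above shows the shape of `minikube pause`: confirm the node container is up, disable the kubelet so nothing restarts the workloads, enumerate CRI containers in the protected namespaces, then freeze them. The run stops right after `sudo runc list -f json`, which is presumably where exit status 80 originates. A rough sketch of the same sequence, assuming direct shell access to the node (minikube does this over its ssh_runner) and default runc state paths:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		namespaces := []string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"}

		// 1. The kubelet would restart anything we pause, so stop it first.
		if _, err := run("sudo", "systemctl", "disable", "--now", "kubelet"); err != nil {
			fmt.Println("disable kubelet:", err)
		}

		// 2. Collect running container IDs per namespace label, as crictl does above.
		var ids []string
		for _, ns := range namespaces {
			out, err := run("sudo", "crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace="+ns)
			if err == nil && out != "" {
				ids = append(ids, strings.Fields(out)...)
			}
		}

		// 3. Freeze each container. With CRI-O the runc state can live under a
		//    non-default root, in which case `runc --root <dir> pause` is needed.
		for _, id := range ids {
			if _, err := run("sudo", "runc", "pause", id); err != nil {
				fmt.Printf("pause %s: %v\n", id, err)
			}
		}
	}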
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210813204926-13784
helpers_test.go:236: (dbg) docker inspect newest-cni-20210813204926-13784:

-- stdout --
	[
	    {
	        "Id": "b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c",
	        "Created": "2021-08-13T20:49:28.185191409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:50:48.172911106Z",
	            "FinishedAt": "2021-08-13T20:50:45.694194258Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/hosts",
	        "LogPath": "/var/lib/docker/containers/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c-json.log",
	        "Name": "/newest-cni-20210813204926-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20210813204926-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210813204926-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d6e90d5f3317bc571e7208ee3a331ff18f68bc986253bcaa4672092565d6eb97-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d6e90d5f3317bc571e7208ee3a331ff18f68bc986253bcaa4672092565d6eb97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d6e90d5f3317bc571e7208ee3a331ff18f68bc986253bcaa4672092565d6eb97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d6e90d5f3317bc571e7208ee3a331ff18f68bc986253bcaa4672092565d6eb97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210813204926-13784",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210813204926-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210813204926-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210813204926-13784",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210813204926-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b2f0e421433ec3c346a1a9b8df726e79a9d19e6bb2601e6dbbe4194b1edd3127",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b2f0e421433e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210813204926-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b6f86c7573af"
	                    ],
	                    "NetworkID": "5952937ba827e1b9acf33cb56d9de999cc3a2580fd857a98f092a890ee878345",
	                    "EndpointID": "4e32b45c66c804f51d961c18491fdcfa3a91e97001f8f05e7a7d1cc0f86bd4f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
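Everything the test needs from that inspect dump can be pulled with a format template instead of parsing the JSON; the pause trace earlier used exactly this to find the SSH host port (22/tcp maps to 32970 in the Ports block above). A small sketch, assuming only that docker is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort mirrors the cli_runner call in the pause trace:
	// docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' <name>
	func hostPort(container, port string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		p, err := hostPort("newest-cni-20210813204926-13784", "22/tcp")
		fmt.Println(p, err) // prints 32970 for the container state shown above
	}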
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813204926-13784 -n newest-cni-20210813204926-13784
E0813 20:51:16.135765   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813204926-13784 -n newest-cni-20210813204926-13784: exit status 2 (14.583675124s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0813 20:51:30.406269  274158 status.go:422] Error apiserver status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
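The per-check `[+]`/`[-]` lines come straight from the apiserver's health endpoint; `minikube status` surfaces the 500 as an apiserver error and exits 2. The probe can be reproduced directly, as in this minimal sketch (address taken from the error above; TLS verification is skipped only because this is a throwaway diagnostic, and failure reasons stay "withheld" unless the caller is authorized):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.76.2:8443/healthz?verbose")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status) // 500 while [-]etcd is failing, 200 once healthy
		fmt.Print(string(body))
	}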
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210813204926-13784 logs -n 25

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210813204926-13784 logs -n 25: exit status 110 (23.165614539s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:28 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:15 UTC | Fri, 13 Aug 2021 20:47:32 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:44 UTC | Fri, 13 Aug 2021 20:47:45 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:19 UTC | Fri, 13 Aug 2021 20:49:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:21 UTC | Fri, 13 Aug 2021 20:49:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:50:24 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:24 UTC | Fri, 13 Aug 2021 20:50:25 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:25 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:50:50 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:00 UTC | Fri, 13 Aug 2021 20:51:00 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:03 UTC | Fri, 13 Aug 2021 20:51:03 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:05 UTC | Fri, 13 Aug 2021 20:51:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:06 UTC | Fri, 13 Aug 2021 20:51:10 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:11 UTC | Fri, 13 Aug 2021 20:51:11 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:51:12 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:13 UTC | Fri, 13 Aug 2021 20:51:13 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:51:22 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:51:11
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:51:11.626877  271328 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:51:11.627052  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627060  271328 out.go:311] Setting ErrFile to fd 2...
	I0813 20:51:11.627064  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627159  271328 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:51:11.627409  271328 out.go:305] Setting JSON to false
	I0813 20:51:11.666661  271328 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5634,"bootTime":1628882237,"procs":328,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:51:11.666785  271328 start.go:121] virtualization: kvm guest
	I0813 20:51:11.669469  271328 out.go:177] * [auto-20210813204009-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:51:11.669645  271328 notify.go:169] Checking for updates...
	I0813 20:51:11.671319  271328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:11.672833  271328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:51:11.674351  271328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:51:11.675913  271328 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:51:11.676594  271328 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:11.676833  271328 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.676967  271328 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.677023  271328 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:51:11.731497  271328 docker.go:132] docker version: linux-19.03.15
	I0813 20:51:11.731582  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.824730  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.775305956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.824827  271328 docker.go:244] overlay module found
	I0813 20:51:11.826307  271328 out.go:177] * Using the docker driver based on user configuration
	I0813 20:51:11.826332  271328 start.go:278] selected driver: docker
	I0813 20:51:11.826337  271328 start.go:751] validating driver "docker" against <nil>
	I0813 20:51:11.826355  271328 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:51:11.826409  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:51:11.826435  271328 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:51:11.827724  271328 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
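The two cgroup warnings above are derived from the `docker system info` output (note MemoryLimit:true but SwapLimit:false in the info dump on this Debian 9 host, plus the "No swap limit support" warning docker itself reports). A minimal re-check of the same fields; the exact condition minikube tests in oci.go is an assumption here:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the trace runs: docker system info --format "{{json .}}"
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var info struct {
			MemoryLimit bool
			SwapLimit   bool
		}
		if err := json.Unmarshal(out, &info); err != nil {
			fmt.Println(err)
			return
		}
		if !info.MemoryLimit || !info.SwapLimit {
			fmt.Println("! Your cgroup does not allow setting memory.")
		}
	}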
	I0813 20:51:11.828584  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.921127  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.870452453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.921281  271328 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:51:11.921463  271328 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:51:11.921497  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:11.921506  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:11.921514  271328 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:51:11.921523  271328 start_flags.go:277] config:
	{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
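The cni.go lines above capture the selection rule: no --cni was given, the driver is docker and the runtime is crio, so minikube recommends kindnet and sets NetworkPlugin=cni. An illustrative reduction of that decision (not the actual cni.go source, and deliberately simplified):

	package main

	import "fmt"

	func chooseCNI(driver, runtime, requested string) string {
		if requested != "" {
			return requested // a user-supplied --cni always wins
		}
		if runtime != "docker" {
			// crio/containerd need an explicit CNI on the kic drivers;
			// kindnet is minikube's default pick.
			return "kindnet"
		}
		return "" // dockershim-era default needs no extra CNI
	}

	func main() {
		fmt.Println(chooseCNI("docker", "crio", "")) // kindnet, as in the log
	}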
	I0813 20:51:11.924012  271328 out.go:177] * Starting control plane node auto-20210813204009-13784 in cluster auto-20210813204009-13784
	I0813 20:51:11.924056  271328 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:51:11.925270  271328 out.go:177] * Pulling base image ...
	I0813 20:51:11.925296  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:11.925327  271328 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:51:11.925325  271328 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:51:11.925373  271328 cache.go:56] Caching tarball of preloaded images
	I0813 20:51:11.925616  271328 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:51:11.925640  271328 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:51:11.925773  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:11.925807  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json: {Name:mk3876305492e8ad5450e3976660c9fa1c973e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:12.029343  271328 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:51:12.029375  271328 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:51:12.029391  271328 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:51:12.029434  271328 start.go:313] acquiring machines lock for auto-20210813204009-13784: {Name:mkd0aba803bc7694302f970fb956ac46569643dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:51:12.029622  271328 start.go:317] acquired machines lock for "auto-20210813204009-13784" in 163.616µs
	I0813 20:51:12.029653  271328 start.go:89] Provisioning new machine with config: &{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:12.029748  271328 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:51:11.473988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:11.474018  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:11.573472  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:11.573526  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:11.658988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.659019  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:11.685635  264876 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.988027  264876 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813204926-13784"
	I0813 20:51:12.521134  264876 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:12.521160  264876 addons.go:344] enableAddons completed in 2.029586792s
	I0813 20:51:12.583342  264876 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:12.585304  264876 out.go:177] 
	W0813 20:51:12.585562  264876 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:12.587605  264876 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:12.589196  264876 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813204926-13784" cluster and "default" namespace by default
	I0813 20:51:08.546768  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.046384  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.546599  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.046701  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.546641  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.046329  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.546622  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.046214  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.546737  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.666694  233224 kubeadm.go:985] duration metric: took 12.281927379s to wait for elevateKubeSystemPrivileges.
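
The nine identical `kubectl get sa default` runs above are a readiness poll: the default ServiceAccount only exists once the controller-manager's token controller has come up, so the bootstrapper retries on a fixed interval until the command succeeds. A minimal sketch of that loop, reusing the binary path and kubeconfig shown in the log; the 500ms interval matches the timestamp spacing above, while the timeout here is illustrative rather than minikube's exact value.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// deadline passes, mirroring the repeated runs in the log above.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // ServiceAccount exists; privileges can be elevated
		}
		time.Sleep(500 * time.Millisecond) // log entries are ~500ms apart
	}
	return fmt.Errorf("default ServiceAccount never appeared within %v", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount ready")
}
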
	I0813 20:51:12.666726  233224 kubeadm.go:392] StartCluster complete in 5m41.350158589s
	I0813 20:51:12.666746  233224 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:12.666841  233224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:12.669323  233224 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:13.198236  233224 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813204216-13784" rescaled to 1
	I0813 20:51:13.198297  233224 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:51:13.198331  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:13.200510  233224 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:13.198427  233224 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:13.200649  233224 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200666  233224 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200671  233224 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:13.200686  233224 addons.go:59] Setting dashboard=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.198561  233224 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:13.200707  233224 addons.go:135] Setting addon dashboard=true in "no-preload-20210813204216-13784"
	I0813 20:51:13.200710  233224 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813204216-13784"
	W0813 20:51:13.200714  233224 addons.go:147] addon dashboard should already be in state true
	I0813 20:51:13.200722  233224 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200733  233224 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:13.200743  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200748  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200588  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:13.200700  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200713  233224 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200905  233224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813204216-13784"
	I0813 20:51:13.201200  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201286  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201320  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201369  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.268820  233224 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.268850  233224 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:13.268885  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.269529  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.272105  233224 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.272280  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:13.276915  233224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:13.275633  233224 node_ready.go:49] node "no-preload-20210813204216-13784" has status "Ready":"True"
	I0813 20:51:13.277012  233224 node_ready.go:38] duration metric: took 4.87652ms waiting for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.277035  233224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:13.277050  233224 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.277062  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:13.277114  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.280067  233224 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.282273  233224 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:13.282349  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:13.282360  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:13.282428  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.288178  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:13.302483  233224 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.302581  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:13.302600  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:13.302672  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.364847  233224 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.364873  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:13.364933  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.394311  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.422036  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.432725  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.457704  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.517628  233224 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
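
The CoreDNS step above (the `sed ... | kubectl replace -f -` pipeline at 20:51:13.272280) splices a `hosts` stanza ahead of the `forward` plugin so in-cluster pods can resolve host.minikube.internal to the gateway IP. A sketch of the same edit driven from Go, assuming kubectl is on PATH and the kubeconfig path from the log; the stanza text mirrors what the logged sed script injects, and the exact `forward` line matched here may carry extra options in a real Corefile.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostsStanza mirrors the block the logged sed script inserts before the
// "forward . /etc/resolv.conf" line of the coredns ConfigMap.
const hostsStanza = `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
`

func main() {
	// Fetch the current coredns ConfigMap as YAML.
	out, err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
		"-n", "kube-system", "get", "configmap", "coredns", "-o", "yaml").Output()
	if err != nil {
		panic(err)
	}
	// Insert the hosts stanza ahead of the forward plugin, as sed's /i does.
	patched := strings.Replace(string(out),
		"        forward . /etc/resolv.conf",
		hostsStanza+"        forward . /etc/resolv.conf", 1)
	// Replace the ConfigMap in place, like the logged "kubectl replace -f -".
	cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
		"replace", "-f", "-")
	cmd.Stdin = strings.NewReader(patched)
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Println("host record injected into CoreDNS")
}
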
	I0813 20:51:13.528393  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.620168  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:13.620195  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:13.671071  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.681321  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:13.681356  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:13.689240  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:13.689265  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:13.774865  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:13.774905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:13.862937  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:13.862968  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:13.866582  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:13.866605  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:13.965927  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:13.965951  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:13.986024  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:14.070287  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:14.070319  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:14.189473  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:14.189565  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:14.364541  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:14.364569  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:14.492877  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:14.492905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:14.596170  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:14.596202  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:14.663166  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.134726824s)
	I0813 20:51:14.669029  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:15.296512  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.310389487s)
	I0813 20:51:15.296557  233224 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813204216-13784"
	I0813 20:51:15.375159  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.190525  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.521448806s)
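
The recurring `scp memory --> <path>` lines mean the addon manifests are rendered in memory and streamed to the node over SSH rather than copied from files on disk. A rough equivalent of that transfer under the connection details logged above (port 32945, user docker); the key path and sample YAML below are placeholders, and minikube's ssh_runner does this over an SSH session internally rather than by shelling out.

package main

import (
	"bytes"
	"os/exec"
)

// writeRemote streams in-memory bytes into sudo tee on the node, which is
// roughly what the ssh_runner's "scp memory" transfer amounts to.
func writeRemote(path string, data []byte) error {
	// Key path is a placeholder; the log shows the full machines/.../id_rsa path.
	cmd := exec.Command("ssh", "-p", "32945", "-i", "/path/to/id_rsa",
		"docker@127.0.0.1", "sudo tee "+path+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	ns := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
	if err := writeRemote("/etc/kubernetes/addons/dashboard-ns.yaml", ns); err != nil {
		panic(err)
	}
}
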
	I0813 20:51:12.032028  271328 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:51:12.032292  271328 start.go:160] libmachine.API.Create for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:12.032325  271328 client.go:168] LocalClient.Create starting
	I0813 20:51:12.032388  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:51:12.032418  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032440  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032571  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:51:12.032593  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032613  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032954  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:51:12.084329  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:51:12.084421  271328 network_create.go:255] running [docker network inspect auto-20210813204009-13784] to gather additional debugging logs...
	I0813 20:51:12.084441  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784
	W0813 20:51:12.129703  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 returned with exit code 1
	I0813 20:51:12.129740  271328 network_create.go:258] error running [docker network inspect auto-20210813204009-13784]: docker network inspect auto-20210813204009-13784: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210813204009-13784
	I0813 20:51:12.129756  271328 network_create.go:260] output of [docker network inspect auto-20210813204009-13784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210813204009-13784
	
	** /stderr **
	I0813 20:51:12.129811  271328 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:12.181560  271328 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-e58530d1cbfd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c3:d4:16:b0}}
	I0813 20:51:12.182554  271328 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0003f6078] misses:0}
	I0813 20:51:12.182616  271328 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:51:12.182634  271328 network_create.go:106] attempt to create docker network auto-20210813204009-13784 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:51:12.182698  271328 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204009-13784
	I0813 20:51:12.265555  271328 network_create.go:90] docker network auto-20210813204009-13784 192.168.58.0/24 created
	I0813 20:51:12.265592  271328 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20210813204009-13784" container
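
Subnet selection above works by stepping through candidate private /24 blocks (192.168.49.0, then 192.168.58.0, and so on) and claiming the first one not already occupied by a host interface or another Docker network; the node then receives the first client address in the block, i.e. the gateway plus one. A condensed sketch of both steps: the +9 stride matches the 49-to-58 jump in the log, but the real logic also inspects interfaces and tracks in-process reservations rather than relying on Docker rejecting an overlap.

package main

import (
	"fmt"
	"os/exec"
)

// createFreeNetwork tries candidate 192.168.x.0/24 blocks in the order the
// log shows and creates a bridge network on the first one Docker accepts,
// using the same flags as the logged "docker network create".
func createFreeNetwork(name string) (int, error) {
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		err := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", name).Run()
		if err == nil {
			return third, nil
		}
		// Subnet overlaps an existing network: fall through to the next block.
	}
	return 0, fmt.Errorf("no free 192.168.x.0/24 subnet")
}

func main() {
	third, err := createFreeNetwork("auto-20210813204009-13784")
	if err != nil {
		panic(err)
	}
	// Gateway is .1; the single control-plane node takes the next address.
	fmt.Printf("node static IP: 192.168.%d.2\n", third)
}
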
	I0813 20:51:12.265659  271328 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:51:12.325195  271328 cli_runner.go:115] Run: docker volume create auto-20210813204009-13784 --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:51:12.375214  271328 oci.go:102] Successfully created a docker volume auto-20210813204009-13784
	I0813 20:51:12.375313  271328 cli_runner.go:115] Run: docker run --rm --name auto-20210813204009-13784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --entrypoint /usr/bin/test -v auto-20210813204009-13784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:51:13.255475  271328 oci.go:106] Successfully prepared a docker volume auto-20210813204009-13784
	W0813 20:51:13.255535  271328 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:51:13.255544  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:51:13.255605  271328 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:51:13.255907  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:13.255936  271328 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:51:13.256015  271328 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:51:13.443619  271328 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210813204009-13784 --name auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210813204009-13784 --network auto-20210813204009-13784 --ip 192.168.58.2 --volume auto-20210813204009-13784:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:51:14.118301  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Running}}
	I0813 20:51:14.185140  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:14.236626  271328 cli_runner.go:115] Run: docker exec auto-20210813204009-13784 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:51:14.394377  271328 oci.go:278] the created container "auto-20210813204009-13784" has a running status.
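
Because the node container publishes 22, 2376, 8443, 5000 and 32443 to ephemeral loopback ports (the `--publish=127.0.0.1::` flags in the docker run above), every later SSH or API connection first asks Docker which host port was assigned; that is exactly what the recurring `docker container inspect -f` template in this log does. A small helper performing the same lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort recovers the ephemeral 127.0.0.1 port Docker mapped to a given
// container port, using the same Go template the log runs for "22/tcp".
func hostPort(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("auto-20210813204009-13784", "22/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + p) // e.g. 32975 in this run
}
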
	I0813 20:51:14.394412  271328 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa...
	I0813 20:51:14.559698  271328 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:51:14.962022  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:15.017995  271328 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:51:15.018017  271328 kic_runner.go:115] Args: [docker exec --privileged auto-20210813204009-13784 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:51:16.192846  233224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:16.192890  233224 addons.go:344] enableAddons completed in 2.994475177s
	I0813 20:51:17.804083  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.801657  271328 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.545595555s)
	I0813 20:51:17.801693  271328 kic.go:188] duration metric: took 4.545754 seconds to extract preloaded images to volume
	I0813 20:51:17.801770  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:17.842060  271328 machine.go:88] provisioning docker machine ...
	I0813 20:51:17.842103  271328 ubuntu.go:169] provisioning hostname "auto-20210813204009-13784"
	I0813 20:51:17.842167  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:17.880732  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:17.880934  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:17.880952  271328 main.go:130] libmachine: About to run SSH command:
	sudo hostname auto-20210813204009-13784 && echo "auto-20210813204009-13784" | sudo tee /etc/hostname
	I0813 20:51:18.049279  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: auto-20210813204009-13784
	
	I0813 20:51:18.049355  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.089070  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.089215  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.089233  271328 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210813204009-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210813204009-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210813204009-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:51:18.214361  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:51:18.214400  271328 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:51:18.214423  271328 ubuntu.go:177] setting up certificates
	I0813 20:51:18.214435  271328 provision.go:83] configureAuth start
	I0813 20:51:18.214499  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:18.257160  271328 provision.go:138] copyHostCerts
	I0813 20:51:18.257225  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:51:18.257232  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:51:18.257274  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:51:18.257345  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:51:18.257355  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:51:18.257373  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:51:18.257422  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:51:18.257430  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:51:18.257445  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:51:18.257520  271328 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.auto-20210813204009-13784 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210813204009-13784]
	I0813 20:51:18.405685  271328 provision.go:172] copyRemoteCerts
	I0813 20:51:18.405745  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:51:18.405785  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.445891  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:18.536412  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:51:18.553289  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0813 20:51:18.568793  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:51:18.583774  271328 provision.go:86] duration metric: configureAuth took 369.326679ms
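
configureAuth above issues a machine server certificate signed by the minikube CA, valid for every name and IP in the logged san=[...] list. A self-contained sketch of the same issuance with Go's crypto/x509; here a throwaway CA is generated in place of loading ca.pem/ca-key.pem, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs copy the logged san=[...] list.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "auto-20210813204009-13784"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "auto-20210813204009-13784"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
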
	I0813 20:51:18.583798  271328 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:51:18.583946  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:18.584072  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.627524  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.627677  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.627697  271328 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:51:19.012135  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:51:19.012167  271328 machine.go:91] provisioned docker machine in 1.170081385s
	I0813 20:51:19.012178  271328 client.go:171] LocalClient.Create took 6.979844019s
	I0813 20:51:19.012195  271328 start.go:168] duration metric: libmachine.API.Create for "auto-20210813204009-13784" took 6.979905282s
	I0813 20:51:19.012204  271328 start.go:267] post-start starting for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:19.012215  271328 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:51:19.012274  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:51:19.012321  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.051463  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.148765  271328 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:51:19.151322  271328 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:51:19.151341  271328 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:51:19.151349  271328 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:51:19.151355  271328 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:51:19.151364  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:51:19.151409  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:51:19.151507  271328 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:51:19.151607  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:51:19.158200  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:19.176073  271328 start.go:270] post-start completed in 163.849198ms
	I0813 20:51:19.176519  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.224022  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:19.224268  271328 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:51:19.224328  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.265461  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.357703  271328 start.go:129] duration metric: createHost completed in 7.327939716s
	I0813 20:51:19.357731  271328 start.go:80] releasing machines lock for "auto-20210813204009-13784", held for 7.328093299s
	I0813 20:51:19.357829  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.403591  271328 ssh_runner.go:149] Run: systemctl --version
	I0813 20:51:19.403631  271328 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:51:19.403663  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.403725  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.454924  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.455089  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.690299  271328 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:51:19.711263  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:51:19.720449  271328 docker.go:153] disabling docker service ...
	I0813 20:51:19.720510  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:51:19.729566  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:51:19.738541  271328 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:51:19.809055  271328 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:51:19.878138  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:51:19.887210  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:51:19.901071  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:51:19.909825  271328 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:51:19.909855  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
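
The two sed one-liners above rewrite /etc/crio/crio.conf in place: the first pins the pause image, the second points CRI-O's default CNI network at kindnet so new pods attach to the network kindnet manages. The same edit expressed with Go regexps, assuming the file path from the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -e 's|^pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "k8s.gcr.io/pause:3.4.1"`))
	// Equivalent of: sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|'
	data = regexp.MustCompile(`(?m)^.*cni_default_network = .*$`).
		ReplaceAll(data, []byte(`cni_default_network = "kindnet"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
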
	I0813 20:51:19.918547  271328 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:51:19.925341  271328 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:51:19.925401  271328 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:51:19.932883  271328 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
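
The sysctl failure just above is expected on a fresh container: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the provisioner probes the sysctl, falls back to modprobe on failure, then enables IPv4 forwarding. A compact sketch of that probe-and-fallback sequence, mirroring the three logged commands:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe: fails with status 255 when br_netfilter isn't loaded yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Required for pod-to-pod routing through the node.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		panic(err)
	}
}
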
	I0813 20:51:19.939083  271328 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:51:20.008572  271328 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:51:20.019341  271328 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:51:20.019407  271328 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:51:20.022897  271328 start.go:413] Will wait 60s for crictl version
	I0813 20:51:20.022952  271328 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:51:20.049207  271328 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:51:20.049276  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.118062  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.185186  271328 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:51:20.185268  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:20.231193  271328 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:51:20.234527  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:51:20.243481  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:20.243537  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.298894  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.298920  271328 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:51:20.298967  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.326049  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.326070  271328 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:51:20.326138  271328 ssh_runner.go:149] Run: crio config
	I0813 20:51:20.405222  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:20.405254  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:20.405269  271328 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:51:20.405286  271328 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210813204009-13784 NodeName:auto-20210813204009-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:51:20.405450  271328 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "auto-20210813204009-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
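
The kubeadm config above is rendered from the cluster parameters in the preceding "kubeadm options" line (node IP, CRI socket, ports) rather than written by hand; minikube keeps versioned templates for this in its bootstrapper package. A trimmed illustration with text/template covering just the InitConfiguration stanza; the template body here is a simplified stand-in, not minikube's actual template, though the field values mirror the config printed above.

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, map[string]string{
		"NodeIP":        "192.168.58.2",
		"APIServerPort": "8443",
		"CRISocket":     "/var/run/crio/crio.sock",
		"NodeName":      "auto-20210813204009-13784",
	}); err != nil {
		panic(err)
	}
}
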
	
	I0813 20:51:20.406210  271328 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=auto-20210813204009-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:51:20.406291  271328 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:51:20.414073  271328 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:51:20.414143  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:51:20.420611  271328 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (556 bytes)
	I0813 20:51:20.432233  271328 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:51:20.443622  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
	I0813 20:51:20.454650  271328 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:51:20.457221  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
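The one-liner above is an idempotent rewrite of /etc/hosts: grep -v drops any stale control-plane.minikube.internal mapping, the fresh entry is appended, and the result is staged in a temp file before being copied into place. A hedged Go sketch of the same idea (requires root; the address and alias come from the log, everything else is illustrative):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const alias = "control-plane.minikube.internal"
        const entry = "192.168.58.2\t" + alias

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping for the alias, like `grep -v` above.
            if strings.HasSuffix(line, "\t"+alias) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        // Stage then replace, mirroring the "> /tmp/h.$$; sudo cp" pattern.
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
        if err := os.Rename("/tmp/hosts.new", "/etc/hosts"); err != nil {
            log.Fatal(err) // rename fails across filesystems; a copy would not
        }
    }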
	I0813 20:51:20.467941  271328 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784 for IP: 192.168.58.2
	I0813 20:51:20.467993  271328 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:51:20.468013  271328 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:51:20.468073  271328 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key
	I0813 20:51:20.468084  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt with IP's: []
	I0813 20:51:20.834054  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt ...
	I0813 20:51:20.834092  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: {Name:mk7fec601fb1fafe5c23646db0e11a54596e8bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834267  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key ...
	I0813 20:51:20.834281  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key: {Name:mk1cae1776891d9f945556a388916d00049fb0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834361  271328 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041
	I0813 20:51:20.834373  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:51:21.063423  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 ...
	I0813 20:51:21.063459  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041: {Name:mk251c4f0d507b09ef6d31c1707428420ec85197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065611  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 ...
	I0813 20:51:21.065633  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041: {Name:mk4d38dae507bc9d1c850061ba3bdb1c6e2ca7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065723  271328 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt
	I0813 20:51:21.065806  271328 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key
	I0813 20:51:21.065871  271328 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key
	I0813 20:51:21.065883  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt with IP's: []
	I0813 20:51:21.152453  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt ...
	I0813 20:51:21.152481  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt: {Name:mke5a626b5b050e50bb47e400c3bba4f5fb88778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152637  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key ...
	I0813 20:51:21.152650  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key: {Name:mkb2a71eb086a15771297e8ab11e852569412fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152807  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:51:21.152843  271328 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:51:21.152855  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:51:21.152880  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:51:21.152909  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:51:21.152931  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:51:21.152971  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:21.153904  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:51:21.171484  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:51:21.187960  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:51:21.205911  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:51:21.223614  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:51:21.239905  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:51:21.255368  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:51:21.271028  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:51:21.286769  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:51:21.302428  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:51:21.317590  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:51:21.336580  271328 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
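The repeated "scp memory --> ..." entries stream in-memory bytes to the node rather than copying a file from disk. A sketch of that pattern, piping a payload over plain ssh into sudo tee; the host name and payload are placeholders, not minikube's actual transport:

    package main

    import (
        "bytes"
        "log"
        "os/exec"
    )

    func main() {
        kubeconfig := []byte("apiVersion: v1\nkind: Config\n") // placeholder payload
        // Pipe the bytes over SSH into sudo tee on the node.
        cmd := exec.Command("ssh", "minikube-node",
            "sudo tee /var/lib/minikube/kubeconfig >/dev/null")
        cmd.Stdin = bytes.NewReader(kubeconfig)
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }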
	I0813 20:51:21.355880  271328 ssh_runner.go:149] Run: openssl version
	I0813 20:51:21.361210  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:51:21.368318  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371245  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371283  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.376426  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:51:21.384634  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:51:21.392048  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395072  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395113  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.400410  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:51:21.408727  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:51:21.415718  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418881  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418923  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.423802  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
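The openssl/ln sequence above implements OpenSSL's hashed-directory CA lookup: a certificate in /etc/ssl/certs is located through a symlink named <subject-hash>.0, which is why minikubeCA.pem ends up linked as b5213941.0. A small sketch of that helper (an assumed illustration, not minikube's code):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func trustCert(pem string) error {
        // `openssl x509 -hash -noout` prints the subject-name hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace a stale link, mirroring `ln -fs`
        return os.Symlink(pem, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }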
	I0813 20:51:21.431770  271328 kubeadm.go:390] StartCluster: {Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:21.431861  271328 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:51:21.431914  271328 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:21.455876  271328 cri.go:76] found id: ""
	I0813 20:51:21.455927  271328 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:51:21.463196  271328 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:21.471334  271328 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:21.471384  271328 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:21.478565  271328 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:21.478610  271328 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
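The long --ignore-preflight-errors value is a comma-joined list of kubeadm preflight check names; SystemVerification is on the list because of the docker-driver exception logged above. Purely illustrative assembly:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ignore := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "Port-10250", "Swap", "Mem",
            "SystemVerification", // skipped for the docker driver, per the log
        }
        fmt.Println("--ignore-preflight-errors=" + strings.Join(ignore, ","))
    }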
	I0813 20:51:18.862764  233224 pod_ready.go:92] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.862797  233224 pod_ready.go:81] duration metric: took 5.574582513s waiting for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.862817  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867642  233224 pod_ready.go:92] pod "coredns-78fcd69978-kbf57" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.867658  233224 pod_ready.go:81] duration metric: took 4.833167ms waiting for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867668  233224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:20.879817  233224 pod_ready.go:102] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:21.378531  233224 pod_ready.go:92] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.378554  233224 pod_ready.go:81] duration metric: took 2.510878118s waiting for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.378572  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382866  233224 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.382882  233224 pod_ready.go:81] duration metric: took 4.296091ms waiting for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382892  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386782  233224 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.386801  233224 pod_ready.go:81] duration metric: took 3.90189ms waiting for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386813  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390480  233224 pod_ready.go:92] pod "kube-proxy-vf22v" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.390494  233224 pod_ready.go:81] duration metric: took 3.672888ms waiting for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390501  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604404  233224 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.604433  233224 pod_ready.go:81] duration metric: took 213.923321ms waiting for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604445  233224 pod_ready.go:38] duration metric: took 8.327391702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:21.604469  233224 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:51:21.604523  233224 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:21.685434  233224 api_server.go:70] duration metric: took 8.487094951s to wait for apiserver process to appear ...
	I0813 20:51:21.685459  233224 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:51:21.685471  233224 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:51:21.691084  233224 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:51:21.691907  233224 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:51:21.691929  233224 api_server.go:129] duration metric: took 6.463677ms to wait for apiserver health ...
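The healthz wait above amounts to polling https://<node>:8443/healthz until it returns 200 "ok". A hedged sketch of a single probe; the real check authenticates with the cluster's client certificates, and TLS verification is skipped here only to keep the example short:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz") // endpoint from the log
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect `200: ok`, as logged
    }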
	I0813 20:51:21.691939  233224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:51:21.806833  233224 system_pods.go:59] 10 kube-system pods found
	I0813 20:51:21.806865  233224 system_pods.go:61] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:21.806872  233224 system_pods.go:61] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:21.806878  233224 system_pods.go:61] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:21.806884  233224 system_pods.go:61] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:21.806890  233224 system_pods.go:61] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:21.806897  233224 system_pods.go:61] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:21.806903  233224 system_pods.go:61] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:21.806909  233224 system_pods.go:61] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:21.806921  233224 system_pods.go:61] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:21.806947  233224 system_pods.go:61] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:21.806955  233224 system_pods.go:74] duration metric: took 115.009603ms to wait for pod list to return data ...
	I0813 20:51:21.806968  233224 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:51:22.003355  233224 default_sa.go:45] found service account: "default"
	I0813 20:51:22.003384  233224 default_sa.go:55] duration metric: took 196.403211ms for default service account to be created ...
	I0813 20:51:22.003397  233224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:51:22.206326  233224 system_pods.go:86] 10 kube-system pods found
	I0813 20:51:22.206359  233224 system_pods.go:89] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:22.206368  233224 system_pods.go:89] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:22.206376  233224 system_pods.go:89] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:22.206382  233224 system_pods.go:89] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:22.206390  233224 system_pods.go:89] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:22.206398  233224 system_pods.go:89] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:22.206407  233224 system_pods.go:89] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:22.206414  233224 system_pods.go:89] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:22.206428  233224 system_pods.go:89] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:22.206438  233224 system_pods.go:89] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:22.206451  233224 system_pods.go:126] duration metric: took 203.046705ms to wait for k8s-apps to be running ...
	I0813 20:51:22.206463  233224 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:51:22.206511  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:22.263444  233224 system_svc.go:56] duration metric: took 56.96766ms WaitForService to wait for kubelet.
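The kubelet service wait relies on systemctl is-active --quiet, which exits 0 exactly when the unit is active. A sketch of the same check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // A nil error from Run() means exit status 0, i.e. the unit is active.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet running:", err == nil)
    }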
	I0813 20:51:22.263482  233224 kubeadm.go:547] duration metric: took 9.065148102s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:51:22.263519  233224 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:51:22.403039  233224 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:51:22.403065  233224 node_conditions.go:123] node cpu capacity is 8
	I0813 20:51:22.403081  233224 node_conditions.go:105] duration metric: took 139.554694ms to run NodePressure ...
	I0813 20:51:22.403096  233224 start.go:231] waiting for startup goroutines ...
	I0813 20:51:22.450275  233224 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:22.455408  233224 out.go:177] 
	W0813 20:51:22.455568  233224 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:22.462541  233224 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:22.464230  233224 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813204216-13784" cluster and "default" namespace by default
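The "minor skew: 2" above compares kubectl's minor version against the cluster's. Roughly (illustrative, not minikube's parser):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component; assumes a well-formed "x.y.z" version.
    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        n, _ := strconv.Atoi(parts[1])
        return n
    }

    func main() {
        client, cluster := "1.20.5", "1.22.0-rc.0" // versions from the log
        skew := minor(cluster) - minor(client)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // prints "minor skew: 2"
    }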
	I0813 20:51:21.794120  271328 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:25.163675  271328 out.go:204]   - Booting up control plane ...
	I0813 20:51:28.722579  240241 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.346300289s)
	I0813 20:51:28.722667  240241 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:28.732254  240241 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:28.732318  240241 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:28.757337  240241 cri.go:76] found id: ""
	I0813 20:51:28.757392  240241 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:28.764551  240241 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:28.764599  240241 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:28.771196  240241 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:28.771247  240241 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:29.067432  240241 out.go:204]   - Generating certificates and keys ...
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:50:48 UTC, end at Fri 2021-08-13 20:51:30 UTC. --
	Aug 13 20:51:11 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:11.873644530Z" level=info msg="Starting container: 242db25cc9ec38e4dfbb4bc1b78267e847c6525b3d52669618f415e5a43d959d" id=3d8a4b98-8c45-458a-b6b7-1ccea9073a35 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:11 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:11.957641710Z" level=info msg="Started container 242db25cc9ec38e4dfbb4bc1b78267e847c6525b3d52669618f415e5a43d959d: kube-system/coredns-78fcd69978-xd22x/coredns" id=3d8a4b98-8c45-458a-b6b7-1ccea9073a35 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:11 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:11.958453573Z" level=info msg="Removed container e3e1b2da2019db53ed308a9e9ce6ef81713aa2a7286fd53705936a2e77521f74: kube-system/storage-provisioner/storage-provisioner" id=8716b6b2-c737-4d10-8bac-e195c1d04226 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:51:12 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:12.070764762Z" level=info msg="Created container f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6: kube-system/kube-proxy-hkksk/kube-proxy" id=d980e2bd-3665-4f9d-9c8d-314584e8cb63 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:12 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:12.071418788Z" level=info msg="Starting container: f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6" id=9e5a6373-083f-46cd-8120-796059e9c0b5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:12 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:12.088181242Z" level=info msg="Started container f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6: kube-system/kube-proxy-hkksk/kube-proxy" id=9e5a6373-083f-46cd-8120-796059e9c0b5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.368284816Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.377884119Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.385915709Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.395374049Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.415399454Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.415442697Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.415470266Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.430482733Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.434385349Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.437713652Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.456889442Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.456941187Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.456984919Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.457065485Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.462800794Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.465718476Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.469603372Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.501282308Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.501320397Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	f659084bf3913       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c   18 seconds ago      Running             kube-proxy                1                   cc0735a93051f
	242db25cc9ec3       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44   19 seconds ago      Running             coredns                   0                   49a26db5d2c2d
	d62beeb118a98       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   19 seconds ago      Running             kindnet-cni               1                   0a37ef49aecac
	5cf3304e920ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   19 seconds ago      Exited              storage-provisioner       2                   86deab179c448
	6d9e66e7c07b8       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75   26 seconds ago      Running             kube-scheduler            1                   fa9c255323766
	3b151ac588863       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c   26 seconds ago      Running             kube-controller-manager   1                   773e4be2f87a8
	f73f74e7523c0       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a   26 seconds ago      Running             kube-apiserver            1                   708b20952a2df
	6254822604de2       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba   26 seconds ago      Running             etcd                      1                   638a6b90a9594
	
	* 
	* ==> coredns [242db25cc9ec38e4dfbb4bc1b78267e847c6525b3d52669618f415e5a43d959d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +0.220015] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +0.427996] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +0.611919] IPv4: martian source 10.244.0.4 from 10.96.0.1, on dev br-e58530d1cbfd
	[  +0.000002] ll header: 00000000: 02 42 c3 d4 16 b0 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +0.251982] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +0.003996] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000002] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +1.723868] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +3.651787] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +2.811832] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000019] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +0.204198] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +3.895437] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[ +12.031205] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000003] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +1.787836] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	
	* 
	* ==> etcd [6254822604de291d71c4f3126bc028a40c85d5466257614648d572ffe55992aa] <==
	* {"level":"info","ts":"2021-08-13T20:51:04.571Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-13T20:51:04.573Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.0","cluster-id":"6f20f2c4b2fb5f8a","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:04.573Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-13T20:51:04.574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2021-08-13T20:51:04.574Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2021-08-13T20:51:04.574Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:04.575Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-13T20:51:04.576Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-13T20:51:04.576Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-13T20:51:04.576Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2021-08-13T20:51:04.576Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20210813204926-13784 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T20:51:04.869Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2021-08-13T20:51:04.869Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:51:53 up  1:34,  0 users,  load average: 3.34, 2.57, 2.22
	Linux newest-cni-20210813204926-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f73f74e7523c018e7399db61b0c77d3dbce4b56844ab358ee6d86318a7f3adb6] <==
	* I0813 20:51:09.430830       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:51:09.440111       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:51:10.143459       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	W0813 20:51:10.234409       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 20:51:10.234498       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:51:10.234506       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 20:51:10.286162       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:51:10.306347       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 20:51:10.392759       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:51:10.411336       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 20:51:12.270407       1 controller.go:611] quota admission added evaluator for: namespaces
	E0813 20:51:52.506241       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc012a29380)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0813 20:51:52.506570       1 trace.go:205] Trace[364753056]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:34c1a80e-d95a-48f6-8cc5-e481f8e1dd47,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:51:19.679) (total time: 32826ms):
	Trace[364753056]: [32.826598785s] [32.826598785s] END
	I0813 20:51:53.267700       1 trace.go:205] Trace[1524671974]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 20:51:23.378) (total time: 29888ms):
	Trace[1524671974]: [29.888809162s] [29.888809162s] END
	I0813 20:51:53.267701       1 trace.go:205] Trace[1414905352]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (13-Aug-2021 20:51:30.993) (total time: 22273ms):
	Trace[1414905352]: [22.273815362s] [22.273815362s] END
	E0813 20:51:53.267743       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc012629140)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	E0813 20:51:53.267744       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc012a29c20)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0813 20:51:53.268062       1 trace.go:205] Trace[1744628617]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:27fddf3c-d589-47d1-ade8-1cc81f8861a4,client:192.168.76.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:51:23.378) (total time: 29889ms):
	Trace[1744628617]: [29.889194945s] [29.889194945s] END
	I0813 20:51:53.269092       1 trace.go:205] Trace[1483036330]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:f128caae-da48-4638-90ff-02ba0609db20,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:30.993) (total time: 22275ms):
	Trace[1483036330]: [22.275247521s] [22.275247521s] END
	
	* 
	* ==> kube-controller-manager [3b151ac588863ccbbe66e0bd5c7ec4eb98e7de2368b68a042cec308fb9fcad5c] <==
	* I0813 20:51:12.528150       1 controller.go:170] Starting ephemeral volume controller
	I0813 20:51:12.528160       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I0813 20:51:12.578791       1 controllermanager.go:577] Started "endpoint"
	I0813 20:51:12.578864       1 endpoints_controller.go:195] Starting endpoint controller
	I0813 20:51:12.578872       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0813 20:51:12.628601       1 controllermanager.go:577] Started "replicationcontroller"
	I0813 20:51:12.628683       1 replica_set.go:186] Starting replicationcontroller controller
	I0813 20:51:12.628691       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0813 20:51:12.683043       1 controllermanager.go:577] Started "deployment"
	I0813 20:51:12.683113       1 deployment_controller.go:153] "Starting controller" controller="deployment"
	I0813 20:51:12.683249       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0813 20:51:12.728206       1 controllermanager.go:577] Started "csrcleaner"
	I0813 20:51:12.728248       1 cleaner.go:82] Starting CSR cleaner controller
	I0813 20:51:12.784526       1 controllermanager.go:577] Started "tokencleaner"
	I0813 20:51:12.784624       1 tokencleaner.go:118] Starting token cleaner controller
	I0813 20:51:12.784638       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0813 20:51:12.784648       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0813 20:51:12.828713       1 controllermanager.go:577] Started "persistentvolume-binder"
	I0813 20:51:12.828774       1 pv_controller_base.go:308] Starting persistent volume controller
	I0813 20:51:12.828782       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	E0813 20:51:12.988049       1 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0813 20:51:12.988173       1 controllermanager.go:577] Started "namespace"
	I0813 20:51:12.988213       1 namespace_controller.go:200] Starting namespace controller
	I0813 20:51:12.988243       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0813 20:51:13.028272       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-proxy [f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6] <==
	* I0813 20:51:12.188589       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0813 20:51:12.188651       1 server_others.go:140] Detected node IP 192.168.76.2
	W0813 20:51:12.188674       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:51:12.312301       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:51:12.312448       1 server_others.go:212] Using iptables Proxier.
	I0813 20:51:12.312468       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:51:12.312517       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:51:12.312915       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:51:12.316264       1 config.go:315] Starting service config controller
	I0813 20:51:12.316302       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:51:12.316585       1 config.go:224] Starting endpoint slice config controller
	I0813 20:51:12.316593       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 20:51:12.375412       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210813204926-13784.169af8e3c030d9c1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4b012b0530e, ext:224480708, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210813204926-13784", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210813204926-13784", UID:"newest-cni-20210813204926-13784", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210813204926-13784.169af8e3c030d9c1" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:51:12.416783       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:51:12.466435       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [6d9e66e7c07b8532bae3df4efaa7ce0d7585e57894154a5bc5ad692e5c2b97e4] <==
	* W0813 20:51:04.767365       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0813 20:51:05.872122       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:51:08.470107       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:51:08.470340       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:51:08.470499       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:51:08.470553       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:51:08.483494       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 20:51:08.483568       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:08.483814       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0813 20:51:08.483896       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0813 20:51:08.584599       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:50:48 UTC, end at Fri 2021-08-13 20:51:53 UTC. --
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529592     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5973d85-0779-4522-be43-0672048bb97e-lib-modules\") pod \"kube-proxy-hkksk\" (UID: \"e5973d85-0779-4522-be43-0672048bb97e\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529634     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ksc2\" (UniqueName: \"kubernetes.io/projected/e5973d85-0779-4522-be43-0672048bb97e-kube-api-access-2ksc2\") pod \"kube-proxy-hkksk\" (UID: \"e5973d85-0779-4522-be43-0672048bb97e\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529661     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr7kc\" (UniqueName: \"kubernetes.io/projected/747a5773-8b7e-4a2a-bc63-f583e8d66484-kube-api-access-jr7kc\") pod \"storage-provisioner\" (UID: \"747a5773-8b7e-4a2a-bc63-f583e8d66484\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529688     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5973d85-0779-4522-be43-0672048bb97e-xtables-lock\") pod \"kube-proxy-hkksk\" (UID: \"e5973d85-0779-4522-be43-0672048bb97e\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529718     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2329f1a-00ca-47a4-8c4c-303a94a4252b-lib-modules\") pod \"kindnet-kj9c8\" (UID: \"b2329f1a-00ca-47a4-8c4c-303a94a4252b\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529755     811 reconciler.go:157] "Reconciler: start to sync state"
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:10.665514     811 request.go:665] Waited for 1.034182122s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:10.681841     811 scope.go:110] "RemoveContainer" containerID="e3e1b2da2019db53ed308a9e9ce6ef81713aa2a7286fd53705936a2e77521f74"
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:10.964718     811 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:10.964801     811 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:10.965341     811 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ss5pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-w86ts_kube-system(0aa37bf6-f816-4d14-a768-96ed42e5eaff): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:10.965432     811 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-w86ts" podUID=0aa37bf6-f816-4d14-a768-96ed42e5eaff
	Aug 13 20:51:11 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:11.763763     811 scope.go:110] "RemoveContainer" containerID="e3e1b2da2019db53ed308a9e9ce6ef81713aa2a7286fd53705936a2e77521f74"
	Aug 13 20:51:11 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:11.764145     811 scope.go:110] "RemoveContainer" containerID="5cf3304e920ea2ce118869d3ede6bac32c48015da75c58f1888d62ec7c002759"
	Aug 13 20:51:11 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:11.764406     811 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(747a5773-8b7e-4a2a-bc63-f583e8d66484)\"" pod="kube-system/storage-provisioner" podUID=747a5773-8b7e-4a2a-bc63-f583e8d66484
	Aug 13 20:51:11 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:11.767888     811 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-w86ts" podUID=0aa37bf6-f816-4d14-a768-96ed42e5eaff
	Aug 13 20:51:12 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:12.773983     811 scope.go:110] "RemoveContainer" containerID="5cf3304e920ea2ce118869d3ede6bac32c48015da75c58f1888d62ec7c002759"
	Aug 13 20:51:12 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:12.774328     811 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(747a5773-8b7e-4a2a-bc63-f583e8d66484)\"" pod="kube-system/storage-provisioner" podUID=747a5773-8b7e-4a2a-bc63-f583e8d66484
	Aug 13 20:51:13 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:13.602447     811 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9debee84-4a17-4c6f-9b60-5c63cfd30411 path="/var/lib/kubelet/pods/9debee84-4a17-4c6f-9b60-5c63cfd30411/volumes"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:13.616145     811 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/docker/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c\": RecentStats: unable to find data in memory cache], [\"/docker\": RecentStats: unable to find data in memory cache], [\"/docker/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/docker\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:13.785581     811 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:13.933294     811 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:51:13 newest-cni-20210813204926-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:51:13 newest-cni-20210813204926-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [5cf3304e920ea2ce118869d3ede6bac32c48015da75c58f1888d62ec7c002759] <==
	* I0813 20:51:11.369892       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0813 20:51:11.373764       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:51:53.358104  276761 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
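
Note: the describe-nodes failure above ("keepalive ping failed to receive ACK within timeout") points at a connection that went silent rather than one that was refused, which fits a node paused mid-collection. Below is a minimal Go sketch, not part of the test suite, of telling the two cases apart with a hard client timeout; the apiserver address (taken from the inspect output that follows) and the InsecureSkipVerify shortcut are illustrative assumptions.

	// Sketch only: probe the apiserver with a short timeout so a hung
	// connection surfaces as a timeout error instead of blocking forever,
	// while a stopped apiserver surfaces as "connection refused".
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // fail fast instead of hanging like the keepalive ping
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/version")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // timeout vs. connection refused
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver status:", resp.Status)
	}
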
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210813204926-13784
helpers_test.go:236: (dbg) docker inspect newest-cni-20210813204926-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c",
	        "Created": "2021-08-13T20:49:28.185191409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:50:48.172911106Z",
	            "FinishedAt": "2021-08-13T20:50:45.694194258Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/hosts",
	        "LogPath": "/var/lib/docker/containers/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c-json.log",
	        "Name": "/newest-cni-20210813204926-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20210813204926-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210813204926-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d6e90d5f3317bc571e7208ee3a331ff18f68bc986253bcaa4672092565d6eb97-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d6e90d5f3317bc571e7208ee3a331ff18f68bc986253bcaa4672092565d6eb97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d6e90d5f3317bc571e7208ee3a331ff18f68bc986253bcaa4672092565d6eb97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d6e90d5f3317bc571e7208ee3a331ff18f68bc986253bcaa4672092565d6eb97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210813204926-13784",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210813204926-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210813204926-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210813204926-13784",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210813204926-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b2f0e421433ec3c346a1a9b8df726e79a9d19e6bb2601e6dbbe4194b1edd3127",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b2f0e421433e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210813204926-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b6f86c7573af"
	                    ],
	                    "NetworkID": "5952937ba827e1b9acf33cb56d9de999cc3a2580fd857a98f092a890ee878345",
	                    "EndpointID": "4e32b45c66c804f51d961c18491fdcfa3a91e97001f8f05e7a7d1cc0f86bd4f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
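
Note: in the inspect output above, HostConfig.PortBindings lists empty HostPort values, which asks Docker to pick ephemeral host ports; the ports actually assigned show up under NetworkSettings.Ports (22/tcp on 32970, 8443/tcp on 32967, and so on). A minimal Go sketch of recovering that mapping from docker inspect JSON, with throwaway structs that cover only the fields involved:

	// Sketch only: decode the NetworkSettings.Ports section of
	// `docker inspect` output and print container-port -> host-port pairs.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// Trimmed sample modeled on the output above.
		data := []byte(`[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"32970"}]}}}]`)
		var out []inspect
		if err := json.Unmarshal(data, &out); err != nil {
			panic(err)
		}
		for port, binds := range out[0].NetworkSettings.Ports {
			for _, b := range binds {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
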
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813204926-13784 -n newest-cni-20210813204926-13784

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813204926-13784 -n newest-cni-20210813204926-13784: exit status 2 (15.781080599s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:52:09.428896  279383 status.go:422] Error apiserver status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
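
Note: in the healthz output above every check passes except "[-]etcd failed: reason withheld", so the apiserver is still serving while its etcd backend is unreachable, which is the state a paused node leaves behind. The verbose body reports one check per line as "[+]name ok" or "[-]name failed: ...", so a prefix scan is enough to pull out the failures; a minimal Go sketch, fed the lines quoted above:

	// Sketch only: collect the failing checks from a verbose /healthz body.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func failingChecks(body string) []string {
		var failed []string
		sc := bufio.NewScanner(strings.NewReader(body))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "[-]") {
				failed = append(failed, line)
			}
		}
		return failed
	}

	func main() {
		body := "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n"
		fmt.Println(failingChecks(body)) // prints: [[-]etcd failed: reason withheld]
	}
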
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210813204926-13784 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210813204926-13784 logs -n 25: exit status 110 (1m0.824690232s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:15 UTC | Fri, 13 Aug 2021 20:47:32 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:44 UTC | Fri, 13 Aug 2021 20:47:45 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:19 UTC | Fri, 13 Aug 2021 20:49:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:21 UTC | Fri, 13 Aug 2021 20:49:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:50:24 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:24 UTC | Fri, 13 Aug 2021 20:50:25 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:25 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:50:50 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:00 UTC | Fri, 13 Aug 2021 20:51:00 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:03 UTC | Fri, 13 Aug 2021 20:51:03 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:05 UTC | Fri, 13 Aug 2021 20:51:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:06 UTC | Fri, 13 Aug 2021 20:51:10 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:11 UTC | Fri, 13 Aug 2021 20:51:11 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:51:12 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:13 UTC | Fri, 13 Aug 2021 20:51:13 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:51:22 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:41 UTC | Fri, 13 Aug 2021 20:51:41 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:52:08 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:51:11
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:51:11.626877  271328 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:51:11.627052  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627060  271328 out.go:311] Setting ErrFile to fd 2...
	I0813 20:51:11.627064  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627159  271328 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:51:11.627409  271328 out.go:305] Setting JSON to false
	I0813 20:51:11.666661  271328 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5634,"bootTime":1628882237,"procs":328,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:51:11.666785  271328 start.go:121] virtualization: kvm guest
	I0813 20:51:11.669469  271328 out.go:177] * [auto-20210813204009-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:51:11.669645  271328 notify.go:169] Checking for updates...
	I0813 20:51:11.671319  271328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:11.672833  271328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:51:11.674351  271328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:51:11.675913  271328 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:51:11.676594  271328 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:11.676833  271328 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.676967  271328 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.677023  271328 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:51:11.731497  271328 docker.go:132] docker version: linux-19.03.15
	I0813 20:51:11.731582  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.824730  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.775305956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.824827  271328 docker.go:244] overlay module found
	I0813 20:51:11.826307  271328 out.go:177] * Using the docker driver based on user configuration
	I0813 20:51:11.826332  271328 start.go:278] selected driver: docker
	I0813 20:51:11.826337  271328 start.go:751] validating driver "docker" against <nil>
	I0813 20:51:11.826355  271328 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:51:11.826409  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:51:11.826435  271328 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:51:11.827724  271328 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:51:11.828584  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.921127  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.870452453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.921281  271328 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:51:11.921463  271328 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:51:11.921497  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:11.921506  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:11.921514  271328 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:51:11.921523  271328 start_flags.go:277] config:
	{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:11.924012  271328 out.go:177] * Starting control plane node auto-20210813204009-13784 in cluster auto-20210813204009-13784
	I0813 20:51:11.924056  271328 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:51:11.925270  271328 out.go:177] * Pulling base image ...
	I0813 20:51:11.925296  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:11.925327  271328 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:51:11.925325  271328 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:51:11.925373  271328 cache.go:56] Caching tarball of preloaded images
	I0813 20:51:11.925616  271328 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:51:11.925640  271328 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:51:11.925773  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:11.925807  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json: {Name:mk3876305492e8ad5450e3976660c9fa1c973e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
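
The two lines above show the pattern minikube uses whenever it persists profile state: take a named write lock (with the logged Delay:500ms / Timeout:1m0s settings) before touching config.json, so that the several test runs interleaved in this log cannot corrupt each other's profiles. A minimal sketch of the same lock-then-write shape, assuming a plain O_EXCL lock file as a stand-in for minikube's own lock package; acquireLock and writeConfigLocked are illustrative names:

    package main

    import (
    	"encoding/json"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock polls for an exclusive lock file, mirroring the log's
    // Delay:500ms Timeout:1m0s settings. The lock-file approach is an
    // illustrative stand-in for minikube's real lock implementation.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    // writeConfigLocked marshals cfg and writes it atomically: temp file
    // first, then rename, so readers never observe a half-written config.
    func writeConfigLocked(path string, cfg any) error {
    	release, err := acquireLock(path+".lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		return err
    	}
    	defer release()

    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		return err
    	}
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	cfg := map[string]string{"Name": "auto-20210813204009-13784", "Driver": "docker"}
    	if err := writeConfigLocked("config.json", cfg); err != nil {
    		fmt.Println("save failed:", err)
    	}
    }
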
	I0813 20:51:12.029343  271328 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:51:12.029375  271328 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:51:12.029391  271328 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:51:12.029434  271328 start.go:313] acquiring machines lock for auto-20210813204009-13784: {Name:mkd0aba803bc7694302f970fb956ac46569643dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:51:12.029622  271328 start.go:317] acquired machines lock for "auto-20210813204009-13784" in 163.616µs
	I0813 20:51:12.029653  271328 start.go:89] Provisioning new machine with config: &{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:12.029748  271328 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:51:11.473988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:11.474018  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:11.573472  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:11.573526  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:11.658988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.659019  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:11.685635  264876 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.988027  264876 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813204926-13784"
	I0813 20:51:12.521134  264876 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:12.521160  264876 addons.go:344] enableAddons completed in 2.029586792s
	I0813 20:51:12.583342  264876 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:12.585304  264876 out.go:177] 
	W0813 20:51:12.585562  264876 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:12.587605  264876 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:12.589196  264876 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813204926-13784" cluster and "default" namespace by default
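
The warning above fires because of the "minor skew: 2" computed at start.go:462: Kubernetes supports a client/server minor-version skew of at most one, so kubectl 1.20 against a 1.22 cluster is out of range. A minimal sketch of that skew check, assuming semver-style "major.minor.patch" strings (minorOf is an illustrative helper, not minikube's actual function):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorOf extracts the minor component from a version like "1.22.0-rc.0".
    func minorOf(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	n, _ := strconv.Atoi(parts[1])
    	return n
    }

    func main() {
    	client, cluster := "1.20.5", "1.22.0-rc.0"
    	skew := minorOf(cluster) - minorOf(client)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew) // prints 2, matching the log
    	if skew > 1 {
    		fmt.Println("! kubectl may have incompatibilities with the cluster")
    	}
    }
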
	I0813 20:51:08.546768  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.046384  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.546599  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.046701  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.546641  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.046329  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.546622  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.046214  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.546737  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.666694  233224 kubeadm.go:985] duration metric: took 12.281927379s to wait for elevateKubeSystemPrivileges.
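
The burst of identical "kubectl get sa default" runs above is a fixed-interval poll: the same probe every 500ms until the default service account exists, 12.28s in this run. A minimal sketch of that poll-until-ready loop, assuming a caller-supplied check function and a context deadline; waitFor is an illustrative name:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitFor runs check every interval until it succeeds or ctx expires.
    func waitFor(ctx context.Context, interval time.Duration, check func() error) error {
    	t := time.NewTicker(interval)
    	defer t.Stop()
    	for {
    		if err := check(); err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-t.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    	defer cancel()
    	err := waitFor(ctx, 500*time.Millisecond, func() error {
    		// Same probe the log repeats: does the default SA exist yet?
    		return exec.Command("kubectl", "get", "sa", "default").Run()
    	})
    	fmt.Println("default service account ready:", err == nil)
    }
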
	I0813 20:51:12.666726  233224 kubeadm.go:392] StartCluster complete in 5m41.350158589s
	I0813 20:51:12.666746  233224 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:12.666841  233224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:12.669323  233224 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:13.198236  233224 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813204216-13784" rescaled to 1
	I0813 20:51:13.198297  233224 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:51:13.198331  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:13.200510  233224 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:13.198427  233224 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:13.200649  233224 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200666  233224 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200671  233224 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:13.200686  233224 addons.go:59] Setting dashboard=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.198561  233224 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:13.200707  233224 addons.go:135] Setting addon dashboard=true in "no-preload-20210813204216-13784"
	I0813 20:51:13.200710  233224 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813204216-13784"
	W0813 20:51:13.200714  233224 addons.go:147] addon dashboard should already be in state true
	I0813 20:51:13.200722  233224 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200733  233224 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:13.200743  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200748  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200588  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:13.200700  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200713  233224 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200905  233224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813204216-13784"
	I0813 20:51:13.201200  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201286  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201320  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201369  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.268820  233224 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.268850  233224 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:13.268885  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.269529  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.272105  233224 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.272280  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:13.276915  233224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:13.275633  233224 node_ready.go:49] node "no-preload-20210813204216-13784" has status "Ready":"True"
	I0813 20:51:13.277012  233224 node_ready.go:38] duration metric: took 4.87652ms waiting for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.277035  233224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:13.277050  233224 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.277062  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:13.277114  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.280067  233224 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.282273  233224 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:13.282349  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:13.282360  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:13.282428  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.288178  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:13.302483  233224 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.302581  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:13.302600  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:13.302672  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.364847  233224 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.364873  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:13.364933  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.394311  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.422036  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.432725  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.457704  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.517628  233224 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
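
The ConfigMap rewrite above is one shell pipeline: dump the coredns ConfigMap as YAML, sed-insert a hosts{} stanza just before the "forward . /etc/resolv.conf" directive, and pipe the result back through kubectl replace; that is what makes host.minikube.internal resolvable from pods. A small sketch of the same insertion done over the Corefile text in Go (the log's sed operates on the whole ConfigMap YAML; injectHostRecord here assumes just the Corefile body):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} block before the forward directive,
    // matching what the log's sed expression does to the Corefile.
    func injectHostRecord(corefile, ip, name string) string {
    	hosts := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
    	marker := "        forward . /etc/resolv.conf"
    	return strings.Replace(corefile, marker, hosts+marker, 1)
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.49.1", "host.minikube.internal"))
    }
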
	I0813 20:51:13.528393  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.620168  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:13.620195  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:13.671071  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.681321  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:13.681356  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:13.689240  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:13.689265  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:13.774865  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:13.774905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:13.862937  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:13.862968  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:13.866582  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:13.866605  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:13.965927  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:13.965951  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:13.986024  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:14.070287  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:14.070319  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:14.189473  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:14.189565  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:14.364541  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:14.364569  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:14.492877  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:14.492905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:14.596170  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:14.596202  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:14.663166  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.134726824s)
	I0813 20:51:14.669029  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:15.296512  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.310389487s)
	I0813 20:51:15.296557  233224 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813204216-13784"
	I0813 20:51:15.375159  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.190525  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.521448806s)
	I0813 20:51:12.032028  271328 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:51:12.032292  271328 start.go:160] libmachine.API.Create for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:12.032325  271328 client.go:168] LocalClient.Create starting
	I0813 20:51:12.032388  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:51:12.032418  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032440  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032571  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:51:12.032593  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032613  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032954  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:51:12.084329  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:51:12.084421  271328 network_create.go:255] running [docker network inspect auto-20210813204009-13784] to gather additional debugging logs...
	I0813 20:51:12.084441  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784
	W0813 20:51:12.129703  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 returned with exit code 1
	I0813 20:51:12.129740  271328 network_create.go:258] error running [docker network inspect auto-20210813204009-13784]: docker network inspect auto-20210813204009-13784: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210813204009-13784
	I0813 20:51:12.129756  271328 network_create.go:260] output of [docker network inspect auto-20210813204009-13784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210813204009-13784
	
	** /stderr **
	I0813 20:51:12.129811  271328 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:12.181560  271328 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-e58530d1cbfd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c3:d4:16:b0}}
	I0813 20:51:12.182554  271328 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0003f6078] misses:0}
	I0813 20:51:12.182616  271328 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:51:12.182634  271328 network_create.go:106] attempt to create docker network auto-20210813204009-13784 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:51:12.182698  271328 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204009-13784
	I0813 20:51:12.265555  271328 network_create.go:90] docker network auto-20210813204009-13784 192.168.58.0/24 created
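
The network creation above first skips 192.168.49.0/24 because an existing bridge owns it, then reserves 192.168.58.0/24 and creates the docker network with a fixed gateway and MTU. A minimal sketch of that first-free-subnet scan, assuming candidates advance through the third octet (the +9 step matches the jump from .49 to .58 seen here, but is an assumption) and a caller-supplied taken() probe standing in for the real interface and docker-network inspection:

    package main

    import "fmt"

    // firstFreeSubnet walks 192.168.<octet>.0/24 candidates and returns the
    // first one the probe reports as unused.
    func firstFreeSubnet(taken func(cidr string) bool) string {
    	for octet := 49; octet < 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken(cidr) {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	used := map[string]bool{"192.168.49.0/24": true} // the existing minikube bridge
    	free := firstFreeSubnet(func(c string) bool { return used[c] })
    	fmt.Println("using free private subnet:", free) // 192.168.58.0/24
    }
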
	I0813 20:51:12.265592  271328 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20210813204009-13784" container
	I0813 20:51:12.265659  271328 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:51:12.325195  271328 cli_runner.go:115] Run: docker volume create auto-20210813204009-13784 --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:51:12.375214  271328 oci.go:102] Successfully created a docker volume auto-20210813204009-13784
	I0813 20:51:12.375313  271328 cli_runner.go:115] Run: docker run --rm --name auto-20210813204009-13784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --entrypoint /usr/bin/test -v auto-20210813204009-13784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:51:13.255475  271328 oci.go:106] Successfully prepared a docker volume auto-20210813204009-13784
	W0813 20:51:13.255535  271328 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:51:13.255544  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:51:13.255605  271328 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:51:13.255907  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:13.255936  271328 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:51:13.256015  271328 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:51:13.443619  271328 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210813204009-13784 --name auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210813204009-13784 --network auto-20210813204009-13784 --ip 192.168.58.2 --volume auto-20210813204009-13784:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:51:14.118301  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Running}}
	I0813 20:51:14.185140  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:14.236626  271328 cli_runner.go:115] Run: docker exec auto-20210813204009-13784 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:51:14.394377  271328 oci.go:278] the created container "auto-20210813204009-13784" has a running status.
	I0813 20:51:14.394412  271328 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa...
	I0813 20:51:14.559698  271328 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:51:14.962022  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:15.017995  271328 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:51:15.018017  271328 kic_runner.go:115] Args: [docker exec --privileged auto-20210813204009-13784 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:51:16.192846  233224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:16.192890  233224 addons.go:344] enableAddons completed in 2.994475177s
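
The addon flow that just completed has two phases: each manifest is scp'd into /etc/kubernetes/addons on the node, then a single kubectl apply names every file with repeated -f flags. A minimal sketch of assembling and running that apply, using os/exec locally in place of minikube's remote ssh_runner (paths are the ones shown in the log; the manifest list is abbreviated):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-dp.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    		// ...the remaining dashboard-*.yaml files listed in the log
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m) // one -f per staged manifest, as in the log's command line
    	}
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Println(string(out), err)
    }

Batching all ten manifests into one apply, as the log does, means a single API round-trip of validation and one consistent success/failure point instead of ten.
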
	I0813 20:51:17.804083  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.801657  271328 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.545595555s)
	I0813 20:51:17.801693  271328 kic.go:188] duration metric: took 4.545754 seconds to extract preloaded images to volume
	I0813 20:51:17.801770  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:17.842060  271328 machine.go:88] provisioning docker machine ...
	I0813 20:51:17.842103  271328 ubuntu.go:169] provisioning hostname "auto-20210813204009-13784"
	I0813 20:51:17.842167  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:17.880732  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:17.880934  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:17.880952  271328 main.go:130] libmachine: About to run SSH command:
	sudo hostname auto-20210813204009-13784 && echo "auto-20210813204009-13784" | sudo tee /etc/hostname
	I0813 20:51:18.049279  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: auto-20210813204009-13784
	
	I0813 20:51:18.049355  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.089070  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.089215  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.089233  271328 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210813204009-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210813204009-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210813204009-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:51:18.214361  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
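
Provisioning drives the container over SSH on a host-published port (127.0.0.1:32975 here), running small shell snippets such as the hostname fix-up above. A minimal sketch of one such remote command with golang.org/x/crypto/ssh, assuming key auth with the machine's id_rsa and skipping host-key verification as the throwaway test container effectively allows:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote executes one shell command on the node, like libmachine's
    // "About to run SSH command" steps in the log.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: no host key pinning
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runRemote("127.0.0.1:32975", "docker", "id_rsa",
    		`sudo hostname auto-20210813204009-13784 && echo "auto-20210813204009-13784" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
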
	I0813 20:51:18.214400  271328 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:51:18.214423  271328 ubuntu.go:177] setting up certificates
	I0813 20:51:18.214435  271328 provision.go:83] configureAuth start
	I0813 20:51:18.214499  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:18.257160  271328 provision.go:138] copyHostCerts
	I0813 20:51:18.257225  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:51:18.257232  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:51:18.257274  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:51:18.257345  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:51:18.257355  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:51:18.257373  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:51:18.257422  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:51:18.257430  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:51:18.257445  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:51:18.257520  271328 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.auto-20210813204009-13784 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210813204009-13784]
	I0813 20:51:18.405685  271328 provision.go:172] copyRemoteCerts
	I0813 20:51:18.405745  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:51:18.405785  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.445891  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:18.536412  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:51:18.553289  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0813 20:51:18.568793  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:51:18.583774  271328 provision.go:86] duration metric: configureAuth took 369.326679ms
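
configureAuth, timed above, generates a server certificate whose SANs cover every name the node may be reached by: 192.168.58.2, 127.0.0.1, localhost, minikube and the profile name, as listed in the "san=[...]" log line. A minimal sketch of issuing such a certificate with crypto/x509, self-signed for brevity where minikube signs with its ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.auto-20210813204009-13784"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log's "san=[...]" list.
    		DNSNames:    []string{"localhost", "minikube", "auto-20210813204009-13784"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed here; minikube signs against its own CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
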
	I0813 20:51:18.583798  271328 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:51:18.583946  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:18.584072  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.627524  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.627677  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.627697  271328 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:51:19.012135  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:51:19.012167  271328 machine.go:91] provisioned docker machine in 1.170081385s
	I0813 20:51:19.012178  271328 client.go:171] LocalClient.Create took 6.979844019s
	I0813 20:51:19.012195  271328 start.go:168] duration metric: libmachine.API.Create for "auto-20210813204009-13784" took 6.979905282s
	I0813 20:51:19.012204  271328 start.go:267] post-start starting for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:19.012215  271328 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:51:19.012274  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:51:19.012321  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.051463  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.148765  271328 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:51:19.151322  271328 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:51:19.151341  271328 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:51:19.151349  271328 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:51:19.151355  271328 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:51:19.151364  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:51:19.151409  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:51:19.151507  271328 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:51:19.151607  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:51:19.158200  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:19.176073  271328 start.go:270] post-start completed in 163.849198ms
	I0813 20:51:19.176519  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.224022  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:19.224268  271328 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:51:19.224328  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.265461  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.357703  271328 start.go:129] duration metric: createHost completed in 7.327939716s
	I0813 20:51:19.357731  271328 start.go:80] releasing machines lock for "auto-20210813204009-13784", held for 7.328093299s
	I0813 20:51:19.357829  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.403591  271328 ssh_runner.go:149] Run: systemctl --version
	I0813 20:51:19.403631  271328 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:51:19.403663  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.403725  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.454924  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.455089  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.690299  271328 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:51:19.711263  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:51:19.720449  271328 docker.go:153] disabling docker service ...
	I0813 20:51:19.720510  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:51:19.729566  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:51:19.738541  271328 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:51:19.809055  271328 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:51:19.878138  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:51:19.887210  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:51:19.901071  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:51:19.909825  271328 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:51:19.909855  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
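
The two Run lines above patch /etc/crio/crio.conf in place with sed: pin the pause image to k8s.gcr.io/pause:3.4.1 and point cni_default_network at "kindnet". The same line-oriented rewrite expressed in Go, as a sketch over the file contents (the regexes mirror the sed expressions; patchCrioConf is an illustrative name):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    var (
    	pauseRe = regexp.MustCompile(`(?m)^pause_image = .*$`)
    	cniRe   = regexp.MustCompile(`(?m)^.*cni_default_network = .*$`)
    )

    // patchCrioConf applies the two edits the log performs with sed -e ... -i.
    func patchCrioConf(conf string) string {
    	conf = pauseRe.ReplaceAllString(conf, `pause_image = "k8s.gcr.io/pause:3.4.1"`)
    	conf = cniRe.ReplaceAllString(conf, `cni_default_network = "kindnet"`)
    	return conf
    }

    func main() {
    	in := "pause_image = \"old\"\n# cni_default_network = \"\"\n"
    	fmt.Print(patchCrioConf(in))
    }
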
	I0813 20:51:19.918547  271328 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:51:19.925341  271328 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:51:19.925401  271328 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:51:19.932883  271328 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:51:19.939083  271328 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:51:20.008572  271328 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:51:20.019341  271328 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:51:20.019407  271328 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:51:20.022897  271328 start.go:413] Will wait 60s for crictl version
	I0813 20:51:20.022952  271328 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:51:20.049207  271328 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:51:20.049276  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.118062  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.185186  271328 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:51:20.185268  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:20.231193  271328 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:51:20.234527  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:51:20.243481  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:20.243537  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.298894  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.298920  271328 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:51:20.298967  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.326049  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.326070  271328 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:51:20.326138  271328 ssh_runner.go:149] Run: crio config
	I0813 20:51:20.405222  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:20.405254  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:20.405269  271328 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:51:20.405286  271328 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210813204009-13784 NodeName:auto-20210813204009-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:51:20.405450  271328 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "auto-20210813204009-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
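A generated config like the one above can be dry-run before the real init; a sketch using a standard kubeadm phase (the config path is the one minikube copies onto the node later in this log):

	# Run only the preflight checks against the generated config; nothing is installed
	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml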
	
	I0813 20:51:20.406210  271328 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=auto-20210813204009-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:51:20.406291  271328 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:51:20.414073  271328 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:51:20.414143  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:51:20.420611  271328 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (556 bytes)
	I0813 20:51:20.432233  271328 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:51:20.443622  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
	I0813 20:51:20.454650  271328 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:51:20.457221  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:51:20.467941  271328 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784 for IP: 192.168.58.2
	I0813 20:51:20.467993  271328 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:51:20.468013  271328 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:51:20.468073  271328 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key
	I0813 20:51:20.468084  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt with IP's: []
	I0813 20:51:20.834054  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt ...
	I0813 20:51:20.834092  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: {Name:mk7fec601fb1fafe5c23646db0e11a54596e8bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834267  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key ...
	I0813 20:51:20.834281  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key: {Name:mk1cae1776891d9f945556a388916d00049fb0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834361  271328 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041
	I0813 20:51:20.834373  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:51:21.063423  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 ...
	I0813 20:51:21.063459  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041: {Name:mk251c4f0d507b09ef6d31c1707428420ec85197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065611  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 ...
	I0813 20:51:21.065633  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041: {Name:mk4d38dae507bc9d1c850061ba3bdb1c6e2ca7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065723  271328 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt
	I0813 20:51:21.065806  271328 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key
	I0813 20:51:21.065871  271328 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key
	I0813 20:51:21.065883  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt with IP's: []
	I0813 20:51:21.152453  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt ...
	I0813 20:51:21.152481  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt: {Name:mke5a626b5b050e50bb47e400c3bba4f5fb88778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152637  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key ...
	I0813 20:51:21.152650  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key: {Name:mkb2a71eb086a15771297e8ab11e852569412fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152807  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:51:21.152843  271328 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:51:21.152855  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:51:21.152880  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:51:21.152909  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:51:21.152931  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:51:21.152971  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:21.153904  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:51:21.171484  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:51:21.187960  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:51:21.205911  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:51:21.223614  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:51:21.239905  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:51:21.255368  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:51:21.271028  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:51:21.286769  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:51:21.302428  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:51:21.317590  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:51:21.336580  271328 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:51:21.355880  271328 ssh_runner.go:149] Run: openssl version
	I0813 20:51:21.361210  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:51:21.368318  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371245  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371283  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.376426  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:51:21.384634  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:51:21.392048  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395072  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395113  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.400410  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:51:21.408727  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:51:21.415718  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418881  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418923  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.423802  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
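The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how the system trust store locates a CA during verification. A sketch of how one such name is derived:

	# Link name = subject hash of the certificate + ".0" suffix
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"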
	I0813 20:51:21.431770  271328 kubeadm.go:390] StartCluster: {Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:21.431861  271328 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:51:21.431914  271328 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:21.455876  271328 cri.go:76] found id: ""
	I0813 20:51:21.455927  271328 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:51:21.463196  271328 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:21.471334  271328 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:21.471384  271328 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:21.478565  271328 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:21.478610  271328 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:18.862764  233224 pod_ready.go:92] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.862797  233224 pod_ready.go:81] duration metric: took 5.574582513s waiting for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.862817  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867642  233224 pod_ready.go:92] pod "coredns-78fcd69978-kbf57" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.867658  233224 pod_ready.go:81] duration metric: took 4.833167ms waiting for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867668  233224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:20.879817  233224 pod_ready.go:102] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:21.378531  233224 pod_ready.go:92] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.378554  233224 pod_ready.go:81] duration metric: took 2.510878118s waiting for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.378572  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382866  233224 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.382882  233224 pod_ready.go:81] duration metric: took 4.296091ms waiting for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382892  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386782  233224 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.386801  233224 pod_ready.go:81] duration metric: took 3.90189ms waiting for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386813  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390480  233224 pod_ready.go:92] pod "kube-proxy-vf22v" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.390494  233224 pod_ready.go:81] duration metric: took 3.672888ms waiting for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390501  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604404  233224 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.604433  233224 pod_ready.go:81] duration metric: took 213.923321ms waiting for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604445  233224 pod_ready.go:38] duration metric: took 8.327391702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:21.604469  233224 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:51:21.604523  233224 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:21.685434  233224 api_server.go:70] duration metric: took 8.487094951s to wait for apiserver process to appear ...
	I0813 20:51:21.685459  233224 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:51:21.685471  233224 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:51:21.691084  233224 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:51:21.691907  233224 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:51:21.691929  233224 api_server.go:129] duration metric: took 6.463677ms to wait for apiserver health ...
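The healthz probe above is a plain HTTPS GET against the apiserver; with a working kubeconfig the same check is one command (a sketch, not what the test harness itself runs):

	# Returns the literal body "ok" when the apiserver reports healthy
	kubectl get --raw /healthz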
	I0813 20:51:21.691939  233224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:51:21.806833  233224 system_pods.go:59] 10 kube-system pods found
	I0813 20:51:21.806865  233224 system_pods.go:61] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:21.806872  233224 system_pods.go:61] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:21.806878  233224 system_pods.go:61] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:21.806884  233224 system_pods.go:61] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:21.806890  233224 system_pods.go:61] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:21.806897  233224 system_pods.go:61] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:21.806903  233224 system_pods.go:61] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:21.806909  233224 system_pods.go:61] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:21.806921  233224 system_pods.go:61] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:21.806947  233224 system_pods.go:61] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:21.806955  233224 system_pods.go:74] duration metric: took 115.009603ms to wait for pod list to return data ...
	I0813 20:51:21.806968  233224 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:51:22.003355  233224 default_sa.go:45] found service account: "default"
	I0813 20:51:22.003384  233224 default_sa.go:55] duration metric: took 196.403211ms for default service account to be created ...
	I0813 20:51:22.003397  233224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:51:22.206326  233224 system_pods.go:86] 10 kube-system pods found
	I0813 20:51:22.206359  233224 system_pods.go:89] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:22.206368  233224 system_pods.go:89] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:22.206376  233224 system_pods.go:89] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:22.206382  233224 system_pods.go:89] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:22.206390  233224 system_pods.go:89] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:22.206398  233224 system_pods.go:89] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:22.206407  233224 system_pods.go:89] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:22.206414  233224 system_pods.go:89] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:22.206428  233224 system_pods.go:89] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:22.206438  233224 system_pods.go:89] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:22.206451  233224 system_pods.go:126] duration metric: took 203.046705ms to wait for k8s-apps to be running ...
	I0813 20:51:22.206463  233224 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:51:22.206511  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:22.263444  233224 system_svc.go:56] duration metric: took 56.96766ms WaitForService to wait for kubelet.
	I0813 20:51:22.263482  233224 kubeadm.go:547] duration metric: took 9.065148102s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:51:22.263519  233224 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:51:22.403039  233224 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:51:22.403065  233224 node_conditions.go:123] node cpu capacity is 8
	I0813 20:51:22.403081  233224 node_conditions.go:105] duration metric: took 139.554694ms to run NodePressure ...
	I0813 20:51:22.403096  233224 start.go:231] waiting for startup goroutines ...
	I0813 20:51:22.450275  233224 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:22.455408  233224 out.go:177] 
	W0813 20:51:22.455568  233224 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:22.462541  233224 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:22.464230  233224 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813204216-13784" cluster and "default" namespace by default
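The skew warning above is why the output points at the bundled kubectl; spelled out for this profile (profile name taken from the log):

	# Runs a kubectl matching the cluster's v1.22.0-rc.0 instead of the host's 1.20.5
	minikube -p no-preload-20210813204216-13784 kubectl -- get pods -A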
	I0813 20:51:21.794120  271328 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:25.163675  271328 out.go:204]   - Booting up control plane ...
	I0813 20:51:28.722579  240241 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.346300289s)
	I0813 20:51:28.722667  240241 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:28.732254  240241 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:28.732318  240241 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:28.757337  240241 cri.go:76] found id: ""
	I0813 20:51:28.757392  240241 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:28.764551  240241 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:28.764599  240241 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:28.771196  240241 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:28.771247  240241 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:29.067432  240241 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:29.947085  240241 out.go:204]   - Booting up control plane ...
	I0813 20:51:40.720555  271328 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:41.136233  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:41.136257  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:41.138470  271328 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:41.138531  271328 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:41.142093  271328 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:41.142114  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:41.159919  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
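The CNI step stats /opt/cni/bin/portmap first because the kindnet manifest assumes the portmap plugin is present on the node; only then is the manifest applied with the cluster's own kubectl, as in the two commands above:

	stat /opt/cni/bin/portmap
	sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml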
	I0813 20:51:43.999786  240241 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:44.412673  240241 cni.go:93] Creating CNI manager for ""
	I0813 20:51:44.412698  240241 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:44.414497  240241 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:44.414556  240241 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:44.418236  240241 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:44.418253  240241 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:44.430863  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:51:41.568473  271328 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:41.568595  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:41.568620  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=auto-20210813204009-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:41.684391  271328 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:41.684482  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:42.252918  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:42.753184  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:43.253340  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:43.752498  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.252543  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.752811  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.253371  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.753399  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.252813  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.663289  240241 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:44.663354  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813204407-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_44_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.663359  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.785476  240241 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:44.785625  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.361034  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.860496  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.360813  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.861457  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.360900  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.860847  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.361284  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.860717  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.361233  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.753324  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.622147  271328 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.868786003s)
	I0813 20:51:48.753354  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.860593  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.861309  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.361330  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.860839  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.360530  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.261881  271328 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.5084884s)
	I0813 20:51:52.752569  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.253464  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.753088  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.252748  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.752605  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.253338  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.752990  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.253395  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.860519  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.360704  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.861401  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.360874  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.861184  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.935142  240241 kubeadm.go:985] duration metric: took 12.271847359s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:56.935173  240241 kubeadm.go:392] StartCluster complete in 5m59.56574911s
	I0813 20:51:56.935192  240241 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:56.935280  240241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:56.936618  240241 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.471369  240241 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813204407-13784" rescaled to 1
	I0813 20:51:57.471434  240241 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:57.473147  240241 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:57.473200  240241 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:57.471473  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:57.471495  240241 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:57.473309  240241 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473332  240241 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473329  240241 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.473341  240241 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:57.473359  240241 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473373  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.471677  240241 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:57.473389  240241 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473397  240241 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473415  240241 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473418  240241 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.473375  240241 addons.go:147] addon dashboard should already be in state true
	W0813 20:51:57.473430  240241 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:57.473453  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.473469  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.473755  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473923  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473970  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473984  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.500075  240241 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813204407-13784" to be "Ready" ...
	I0813 20:51:57.508390  240241 node_ready.go:49] node "default-k8s-different-port-20210813204407-13784" has status "Ready":"True"
	I0813 20:51:57.508412  240241 node_ready.go:38] duration metric: took 8.303909ms waiting for node "default-k8s-different-port-20210813204407-13784" to be "Ready" ...
	I0813 20:51:57.508425  240241 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:57.530074  240241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:57.559993  240241 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:57.561443  240241 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:56.753159  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.252816  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.323178  271328 kubeadm.go:985] duration metric: took 15.754657804s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:57.323205  271328 kubeadm.go:392] StartCluster complete in 35.891441868s
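The long run of "kubectl get sa default" calls above is a poll: the apiserver comes up before the controller-manager has created the default service account, so minikube retries roughly twice a second until it exists. The equivalent wait, as a sketch:

	# Block until the default namespace has its default service account
	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done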
	I0813 20:51:57.323233  271328 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.323334  271328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:57.325280  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.844496  271328 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20210813204009-13784" rescaled to 1
	I0813 20:51:57.844542  271328 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:57.847125  271328 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:57.847179  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:57.844600  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:57.844628  271328 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:51:57.844773  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:57.847273  271328 addons.go:59] Setting storage-provisioner=true in profile "auto-20210813204009-13784"
	I0813 20:51:57.847289  271328 addons.go:135] Setting addon storage-provisioner=true in "auto-20210813204009-13784"
	W0813 20:51:57.847298  271328 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:57.847304  271328 addons.go:59] Setting default-storageclass=true in profile "auto-20210813204009-13784"
	I0813 20:51:57.847325  271328 host.go:66] Checking if "auto-20210813204009-13784" exists ...
	I0813 20:51:57.847330  271328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20210813204009-13784"
	I0813 20:51:57.847657  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.847848  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.914584  271328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:57.914695  271328 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.914708  271328 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:57.914767  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:57.926636  271328 addons.go:135] Setting addon default-storageclass=true in "auto-20210813204009-13784"
	W0813 20:51:57.926670  271328 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:57.926704  271328 host.go:66] Checking if "auto-20210813204009-13784" exists ...
	I0813 20:51:57.927086  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.944440  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:57.946970  271328 node_ready.go:35] waiting up to 5m0s for node "auto-20210813204009-13784" to be "Ready" ...
	I0813 20:51:57.951330  271328 node_ready.go:49] node "auto-20210813204009-13784" has status "Ready":"True"
	I0813 20:51:57.951353  271328 node_ready.go:38] duration metric: took 4.355543ms waiting for node "auto-20210813204009-13784" to be "Ready" ...
	I0813 20:51:57.951367  271328 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:57.964918  271328 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:57.974587  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:57.995812  271328 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:57.995845  271328 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:57.995903  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:58.104226  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:58.127261  271328 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:58.207306  271328 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:58.318052  271328 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0813 20:51:57.560121  240241 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.562962  240241 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:57.563043  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:57.563058  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:57.563087  240241 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:57.563122  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.563145  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:57.563156  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:57.563204  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.563285  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:57.563317  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.585350  240241 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.585389  240241 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:57.585423  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.586491  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.640285  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.643118  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.651320  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.655597  240241 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:57.655617  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:57.655661  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.659397  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:57.708263  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.772822  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:57.772851  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:57.775665  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:57.775686  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:57.778938  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.866896  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:57.866921  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:57.875909  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:57.875935  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:57.895465  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:57.895493  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:57.906579  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:57.906602  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:57.958953  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:57.977795  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:57.977819  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:57.988125  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:58.065141  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:58.065163  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:58.173899  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:58.173923  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:58.280880  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:58.280914  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:58.289511  240241 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0813 20:51:58.375994  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:58.376079  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:58.488006  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:58.488037  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:58.562447  240241 pod_ready.go:97] error getting pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-94bmz" not found
	I0813 20:51:58.562481  240241 pod_ready.go:81] duration metric: took 1.032368127s waiting for pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:58.562494  240241 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-94bmz" not found
	I0813 20:51:58.562502  240241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:58.578755  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:59.569598  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.79061998s)
	I0813 20:51:59.658034  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69903678s)
	I0813 20:51:59.658141  240241 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:59.658099  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.669942348s)
	I0813 20:52:00.558702  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.979881854s)
	I0813 20:51:58.812728  271328 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:51:58.812772  271328 addons.go:344] enableAddons completed in 968.157461ms
	I0813 20:51:59.995308  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:00.560716  240241 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0813 20:52:00.560785  240241 addons.go:344] enableAddons completed in 3.089294462s
	I0813 20:52:00.667954  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:03.098119  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:02.492544  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:04.992816  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:05.098856  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:07.099285  240241 pod_ready.go:92] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.099314  240241 pod_ready.go:81] duration metric: took 8.536802711s waiting for pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.099327  240241 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.103649  240241 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.103672  240241 pod_ready.go:81] duration metric: took 4.335636ms waiting for pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.103690  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.107793  240241 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.107812  240241 pod_ready.go:81] duration metric: took 4.11268ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.107827  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.114439  240241 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.114457  240241 pod_ready.go:81] duration metric: took 6.620724ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.114469  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5hsp" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.118338  240241 pod_ready.go:92] pod "kube-proxy-f5hsp" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.118352  240241 pod_ready.go:81] duration metric: took 3.876581ms waiting for pod "kube-proxy-f5hsp" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.118361  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.496572  240241 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.496591  240241 pod_ready.go:81] duration metric: took 378.224297ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.496599  240241 pod_ready.go:38] duration metric: took 9.98816095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:52:07.496618  240241 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:52:07.496655  240241 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:52:07.520058  240241 api_server.go:70] duration metric: took 10.048585682s to wait for apiserver process to appear ...
	I0813 20:52:07.520082  240241 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:52:07.520092  240241 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0813 20:52:07.524876  240241 api_server.go:265] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0813 20:52:07.525872  240241 api_server.go:139] control plane version: v1.21.3
	I0813 20:52:07.525891  240241 api_server.go:129] duration metric: took 5.802306ms to wait for apiserver health ...
	I0813 20:52:07.525914  240241 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:52:07.699622  240241 system_pods.go:59] 9 kube-system pods found
	I0813 20:52:07.699655  240241 system_pods.go:61] "coredns-558bd4d5db-gr5g8" [de1c554f-b8e1-49db-9f29-54dd16296938] Running
	I0813 20:52:07.699660  240241 system_pods.go:61] "etcd-default-k8s-different-port-20210813204407-13784" [fa938417-d881-4a17-bc90-cea2f1e1eb92] Running
	I0813 20:52:07.699664  240241 system_pods.go:61] "kindnet-xg9rd" [b422811b-55ea-4928-823d-0cf4e7b32f3e] Running
	I0813 20:52:07.699669  240241 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813204407-13784" [4b14160e-22dd-4c1f-b1a6-8a15f3d33d36] Running
	I0813 20:52:07.699673  240241 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813204407-13784" [bbac6064-38a3-4fc1-a3fe-61fc1820b250] Running
	I0813 20:52:07.699677  240241 system_pods.go:61] "kube-proxy-f5hsp" [1aed7397-275e-474d-9287-2624feb99c42] Running
	I0813 20:52:07.699681  240241 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813204407-13784" [0ce6078f-1102-4c32-883d-1a4f95d0f4cc] Running
	I0813 20:52:07.699689  240241 system_pods.go:61] "metrics-server-7c784ccb57-44694" [38bf7ccc-7705-4237-a220-b3b4e39f962d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:07.699694  240241 system_pods.go:61] "storage-provisioner" [d9ea2a9c-7cd6-4367-8335-5bdc8cfbcba1] Running
	I0813 20:52:07.699700  240241 system_pods.go:74] duration metric: took 173.777118ms to wait for pod list to return data ...
	I0813 20:52:07.699714  240241 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:52:07.897248  240241 default_sa.go:45] found service account: "default"
	I0813 20:52:07.897273  240241 default_sa.go:55] duration metric: took 197.547768ms for default service account to be created ...
	I0813 20:52:07.897282  240241 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:52:08.100655  240241 system_pods.go:86] 9 kube-system pods found
	I0813 20:52:08.100687  240241 system_pods.go:89] "coredns-558bd4d5db-gr5g8" [de1c554f-b8e1-49db-9f29-54dd16296938] Running
	I0813 20:52:08.100696  240241 system_pods.go:89] "etcd-default-k8s-different-port-20210813204407-13784" [fa938417-d881-4a17-bc90-cea2f1e1eb92] Running
	I0813 20:52:08.100705  240241 system_pods.go:89] "kindnet-xg9rd" [b422811b-55ea-4928-823d-0cf4e7b32f3e] Running
	I0813 20:52:08.100712  240241 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210813204407-13784" [4b14160e-22dd-4c1f-b1a6-8a15f3d33d36] Running
	I0813 20:52:08.100721  240241 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210813204407-13784" [bbac6064-38a3-4fc1-a3fe-61fc1820b250] Running
	I0813 20:52:08.100727  240241 system_pods.go:89] "kube-proxy-f5hsp" [1aed7397-275e-474d-9287-2624feb99c42] Running
	I0813 20:52:08.100734  240241 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210813204407-13784" [0ce6078f-1102-4c32-883d-1a4f95d0f4cc] Running
	I0813 20:52:08.100746  240241 system_pods.go:89] "metrics-server-7c784ccb57-44694" [38bf7ccc-7705-4237-a220-b3b4e39f962d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:08.100756  240241 system_pods.go:89] "storage-provisioner" [d9ea2a9c-7cd6-4367-8335-5bdc8cfbcba1] Running
	I0813 20:52:08.100771  240241 system_pods.go:126] duration metric: took 203.483249ms to wait for k8s-apps to be running ...
	I0813 20:52:08.100783  240241 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:52:08.100832  240241 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:08.111772  240241 system_svc.go:56] duration metric: took 10.982724ms WaitForService to wait for kubelet.
	I0813 20:52:08.111793  240241 kubeadm.go:547] duration metric: took 10.64032656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:52:08.111828  240241 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:52:08.297054  240241 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:52:08.297080  240241 node_conditions.go:123] node cpu capacity is 8
	I0813 20:52:08.297097  240241 node_conditions.go:105] duration metric: took 185.262995ms to run NodePressure ...
	I0813 20:52:08.297110  240241 start.go:231] waiting for startup goroutines ...
	I0813 20:52:08.342344  240241 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:52:08.344774  240241 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210813204407-13784" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:50:48 UTC, end at Fri 2021-08-13 20:52:09 UTC. --
	Aug 13 20:51:11 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:11.873644530Z" level=info msg="Starting container: 242db25cc9ec38e4dfbb4bc1b78267e847c6525b3d52669618f415e5a43d959d" id=3d8a4b98-8c45-458a-b6b7-1ccea9073a35 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:11 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:11.957641710Z" level=info msg="Started container 242db25cc9ec38e4dfbb4bc1b78267e847c6525b3d52669618f415e5a43d959d: kube-system/coredns-78fcd69978-xd22x/coredns" id=3d8a4b98-8c45-458a-b6b7-1ccea9073a35 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:11 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:11.958453573Z" level=info msg="Removed container e3e1b2da2019db53ed308a9e9ce6ef81713aa2a7286fd53705936a2e77521f74: kube-system/storage-provisioner/storage-provisioner" id=8716b6b2-c737-4d10-8bac-e195c1d04226 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:51:12 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:12.070764762Z" level=info msg="Created container f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6: kube-system/kube-proxy-hkksk/kube-proxy" id=d980e2bd-3665-4f9d-9c8d-314584e8cb63 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:12 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:12.071418788Z" level=info msg="Starting container: f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6" id=9e5a6373-083f-46cd-8120-796059e9c0b5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:12 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:12.088181242Z" level=info msg="Started container f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6: kube-system/kube-proxy-hkksk/kube-proxy" id=9e5a6373-083f-46cd-8120-796059e9c0b5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.368284816Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.377884119Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.385915709Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.395374049Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.415399454Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.415442697Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.415470266Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.430482733Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.434385349Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.437713652Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.456889442Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.456941187Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.456984919Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.457065485Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.462800794Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.465718476Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.469603372Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.501282308Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 crio[243]: time="2021-08-13 20:51:13.501320397Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	f659084bf3913       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c   57 seconds ago       Running             kube-proxy                1                   cc0735a93051f
	242db25cc9ec3       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44   58 seconds ago       Running             coredns                   0                   49a26db5d2c2d
	d62beeb118a98       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   58 seconds ago       Running             kindnet-cni               1                   0a37ef49aecac
	5cf3304e920ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   58 seconds ago       Exited              storage-provisioner       2                   86deab179c448
	6d9e66e7c07b8       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75   About a minute ago   Running             kube-scheduler            1                   fa9c255323766
	3b151ac588863       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c   About a minute ago   Running             kube-controller-manager   1                   773e4be2f87a8
	f73f74e7523c0       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a   About a minute ago   Running             kube-apiserver            1                   708b20952a2df
	6254822604de2       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba   About a minute ago   Running             etcd                      1                   638a6b90a9594
	
	* 
	* ==> coredns [242db25cc9ec38e4dfbb4bc1b78267e847c6525b3d52669618f415e5a43d959d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +1.787836] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[ +14.060065] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth132654c8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d2 33 13 cb 90 7c 08 06        .......3...|..
	[  +0.492422] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0537654e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 56 dc 40 69 33 08 06        .......V.@i3..
	[Aug13 20:52] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth42b216bb
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 75 7c 88 de fd 08 06        .......u|.....
	[  +0.348033] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth3a91f4fd
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1e a0 d8 e2 a6 b4 08 06        ..............
	[  +7.435044] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +5.490524] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000025] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +2.047860] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000002] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +4.034563] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd8e69602
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 93 4a 9f fb 2d 08 06        ........J..-..
	[  +3.841985] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth22aecc4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 46 ba a1 61 ad 12 08 06        ......F..a....
	[  +7.179465] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.694631] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [6254822604de291d71c4f3126bc028a40c85d5466257614648d572ffe55992aa] <==
	* {"level":"info","ts":"2021-08-13T20:51:04.571Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-13T20:51:04.573Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.0","cluster-id":"6f20f2c4b2fb5f8a","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:04.573Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-13T20:51:04.574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2021-08-13T20:51:04.574Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2021-08-13T20:51:04.574Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-13T20:51:04.575Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-13T20:51:04.576Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-13T20:51:04.576Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-13T20:51:04.576Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2021-08-13T20:51:04.576Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2021-08-13T20:51:04.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20210813204926-13784 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T20:51:04.867Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T20:51:04.869Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2021-08-13T20:51:04.869Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:53:10 up  1:35,  0 users,  load average: 3.23, 2.78, 2.33
	Linux newest-cni-20210813204926-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f73f74e7523c018e7399db61b0c77d3dbce4b56844ab358ee6d86318a7f3adb6] <==
	* E0813 20:53:08.481841       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:53:08.483003       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:53:08.484112       1 trace.go:205] Trace[55432599]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:40f18ee5-759d-491b-b9e0-6cd79a1dd04d,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:52:08.479) (total time: 60004ms):
	Trace[55432599]: [1m0.00435894s] [1m0.00435894s] END
	E0813 20:53:08.487463       1 timeout.go:135] post-timeout activity - time-elapsed: 7.159037ms, GET "/api/v1/namespaces/kube-system" result: <nil>
	E0813 20:53:08.784383       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0813 20:53:08.784538       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:53:08.784865       1 storage_flowcontrol.go:136] "APF bootstrap ensurer ran into error, will retry later" err="failed ensuring suggested settings - failed to retrieve FlowSchema type=suggested name=\"system-nodes\" error=the server was unable to return a response in the time allotted, but may still be processing the request (get flowschemas.flowcontrol.apiserver.k8s.io system-nodes)"
	E0813 20:53:08.785809       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:53:08.786958       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:53:08.788108       1 trace.go:205] Trace[1618851523]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-nodes,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:1d704b53-bcf1-489c-86c8-474d7408bcf7,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:52:08.783) (total time: 60004ms):
	Trace[1618851523]: [1m0.004494534s] [1m0.004494534s] END
	E0813 20:53:08.788306       1 timeout.go:135] post-timeout activity - time-elapsed: 3.738068ms, GET "/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-nodes" result: <nil>
	W0813 20:53:09.016024       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:09.952486       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0813 20:53:10.025654       1 trace.go:205] Trace[202648913]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (13-Aug-2021 20:52:10.026) (total time: 59999ms):
	Trace[202648913]: [59.999593732s] [59.999593732s] END
	E0813 20:53:10.025686       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0813 20:53:10.025739       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:53:10.028370       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:53:10.029581       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:53:10.030760       1 trace.go:205] Trace[989323138]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:74011544-58cd-4b87-8795-5016f9de9656,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:52:10.025) (total time: 60004ms):
	Trace[989323138]: [1m0.004717333s] [1m0.004717333s] END
	E0813 20:53:10.031496       1 timeout.go:135] post-timeout activity - time-elapsed: 5.718249ms, GET "/api/v1/nodes" result: <nil>
	W0813 20:53:10.083539       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	
	* 
	* ==> kube-controller-manager [3b151ac588863ccbbe66e0bd5c7ec4eb98e7de2368b68a042cec308fb9fcad5c] <==
	* I0813 20:51:12.528150       1 controller.go:170] Starting ephemeral volume controller
	I0813 20:51:12.528160       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I0813 20:51:12.578791       1 controllermanager.go:577] Started "endpoint"
	I0813 20:51:12.578864       1 endpoints_controller.go:195] Starting endpoint controller
	I0813 20:51:12.578872       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0813 20:51:12.628601       1 controllermanager.go:577] Started "replicationcontroller"
	I0813 20:51:12.628683       1 replica_set.go:186] Starting replicationcontroller controller
	I0813 20:51:12.628691       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0813 20:51:12.683043       1 controllermanager.go:577] Started "deployment"
	I0813 20:51:12.683113       1 deployment_controller.go:153] "Starting controller" controller="deployment"
	I0813 20:51:12.683249       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0813 20:51:12.728206       1 controllermanager.go:577] Started "csrcleaner"
	I0813 20:51:12.728248       1 cleaner.go:82] Starting CSR cleaner controller
	I0813 20:51:12.784526       1 controllermanager.go:577] Started "tokencleaner"
	I0813 20:51:12.784624       1 tokencleaner.go:118] Starting token cleaner controller
	I0813 20:51:12.784638       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0813 20:51:12.784648       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0813 20:51:12.828713       1 controllermanager.go:577] Started "persistentvolume-binder"
	I0813 20:51:12.828774       1 pv_controller_base.go:308] Starting persistent volume controller
	I0813 20:51:12.828782       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	E0813 20:51:12.988049       1 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0813 20:51:12.988173       1 controllermanager.go:577] Started "namespace"
	I0813 20:51:12.988213       1 namespace_controller.go:200] Starting namespace controller
	I0813 20:51:12.988243       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0813 20:51:13.028272       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-proxy [f659084bf3913fff64bcbaf42a29fe706d66560c39314099aef88bea2f54c7c6] <==
	* I0813 20:51:12.188589       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0813 20:51:12.188651       1 server_others.go:140] Detected node IP 192.168.76.2
	W0813 20:51:12.188674       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:51:12.312301       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:51:12.312448       1 server_others.go:212] Using iptables Proxier.
	I0813 20:51:12.312468       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:51:12.312517       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:51:12.312915       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:51:12.316264       1 config.go:315] Starting service config controller
	I0813 20:51:12.316302       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:51:12.316585       1 config.go:224] Starting endpoint slice config controller
	I0813 20:51:12.316593       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 20:51:12.375412       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210813204926-13784.169af8e3c030d9c1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4b012b0530e, ext:224480708, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210813204926-13784", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210813204926-13784", UID:"newest-cni-20210813204926-13784", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210813204926-13784.169af8e3c030d9c1" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:51:12.416783       1 shared_informer.go:247] Caches are synced for service config 
	I0813 20:51:12.466435       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [6d9e66e7c07b8532bae3df4efaa7ce0d7585e57894154a5bc5ad692e5c2b97e4] <==
	* W0813 20:51:04.767365       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0813 20:51:05.872122       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:51:08.470107       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:51:08.470340       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:51:08.470499       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:51:08.470553       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:51:08.483494       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 20:51:08.483568       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:51:08.483814       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0813 20:51:08.483896       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0813 20:51:08.584599       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:50:48 UTC, end at Fri 2021-08-13 20:53:10 UTC. --
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529592     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5973d85-0779-4522-be43-0672048bb97e-lib-modules\") pod \"kube-proxy-hkksk\" (UID: \"e5973d85-0779-4522-be43-0672048bb97e\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529634     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ksc2\" (UniqueName: \"kubernetes.io/projected/e5973d85-0779-4522-be43-0672048bb97e-kube-api-access-2ksc2\") pod \"kube-proxy-hkksk\" (UID: \"e5973d85-0779-4522-be43-0672048bb97e\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529661     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr7kc\" (UniqueName: \"kubernetes.io/projected/747a5773-8b7e-4a2a-bc63-f583e8d66484-kube-api-access-jr7kc\") pod \"storage-provisioner\" (UID: \"747a5773-8b7e-4a2a-bc63-f583e8d66484\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529688     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5973d85-0779-4522-be43-0672048bb97e-xtables-lock\") pod \"kube-proxy-hkksk\" (UID: \"e5973d85-0779-4522-be43-0672048bb97e\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529718     811 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2329f1a-00ca-47a4-8c4c-303a94a4252b-lib-modules\") pod \"kindnet-kj9c8\" (UID: \"b2329f1a-00ca-47a4-8c4c-303a94a4252b\") "
	Aug 13 20:51:09 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:09.529755     811 reconciler.go:157] "Reconciler: start to sync state"
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:10.665514     811 request.go:665] Waited for 1.034182122s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:10.681841     811 scope.go:110] "RemoveContainer" containerID="e3e1b2da2019db53ed308a9e9ce6ef81713aa2a7286fd53705936a2e77521f74"
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:10.964718     811 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:10.964801     811 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:10.965341     811 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ss5pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-w86ts_kube-system(0aa37bf6-f816-4d14-a768-96ed42e5eaff): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 13 20:51:10 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:10.965432     811 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-w86ts" podUID=0aa37bf6-f816-4d14-a768-96ed42e5eaff
	Aug 13 20:51:11 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:11.763763     811 scope.go:110] "RemoveContainer" containerID="e3e1b2da2019db53ed308a9e9ce6ef81713aa2a7286fd53705936a2e77521f74"
	Aug 13 20:51:11 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:11.764145     811 scope.go:110] "RemoveContainer" containerID="5cf3304e920ea2ce118869d3ede6bac32c48015da75c58f1888d62ec7c002759"
	Aug 13 20:51:11 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:11.764406     811 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(747a5773-8b7e-4a2a-bc63-f583e8d66484)\"" pod="kube-system/storage-provisioner" podUID=747a5773-8b7e-4a2a-bc63-f583e8d66484
	Aug 13 20:51:11 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:11.767888     811 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-w86ts" podUID=0aa37bf6-f816-4d14-a768-96ed42e5eaff
	Aug 13 20:51:12 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:12.773983     811 scope.go:110] "RemoveContainer" containerID="5cf3304e920ea2ce118869d3ede6bac32c48015da75c58f1888d62ec7c002759"
	Aug 13 20:51:12 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:12.774328     811 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(747a5773-8b7e-4a2a-bc63-f583e8d66484)\"" pod="kube-system/storage-provisioner" podUID=747a5773-8b7e-4a2a-bc63-f583e8d66484
	Aug 13 20:51:13 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:13.602447     811 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9debee84-4a17-4c6f-9b60-5c63cfd30411 path="/var/lib/kubelet/pods/9debee84-4a17-4c6f-9b60-5c63cfd30411/volumes"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 kubelet[811]: E0813 20:51:13.616145     811 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/docker/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c\": RecentStats: unable to find data in memory cache], [\"/docker\": RecentStats: unable to find data in memory cache], [\"/docker/b6f86c7573af46980f1dd61267643eb1c9d7cff61861bc16b4ca5f622836c68c/docker\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:13.785581     811 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 kubelet[811]: I0813 20:51:13.933294     811 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 20:51:13 newest-cni-20210813204926-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:51:13 newest-cni-20210813204926-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:51:13 newest-cni-20210813204926-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [5cf3304e920ea2ce118869d3ede6bac32c48015da75c58f1888d62ec7c002759] <==
	* I0813 20:51:11.369892       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0813 20:51:11.373764       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:53:10.029527  282097 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (116.94s)
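Note on the kubelet errors quoted above: the ErrImagePull / ImagePullBackOff entries for metrics-server are expected rather than part of the failure, because the test deliberately enables the addon with its registry overridden to the unresolvable fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" rows in the Audit table later in this report). A minimal Go sketch, with a hypothetical runMinikube helper standing in for the harness code in helpers_test.go, of how the "(dbg) Run:" lines shell out to the built binary and capture the combined output quoted in the stdout/stderr blocks:

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube is a stand-in for the test harness: like the "(dbg) Run:" lines
// in this report, it invokes the built binary against a profile and returns
// the combined stdout+stderr that the report quotes verbatim.
func runMinikube(profile string, args ...string) (string, error) {
	full := append([]string{"-p", profile}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// The same arguments the Audit table records for this profile; pointing
	// MetricsServer at fake.domain is what makes ErrImagePull the intended state.
	out, err := runMinikube("newest-cni-20210813204926-13784",
		"addons", "enable", "metrics-server",
		"--images=MetricsServer=k8s.gcr.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	fmt.Println(out, err)
}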

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (111.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210813204216-13784 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-20210813204216-13784 --alsologtostderr -v=1: exit status 80 (2.070873368s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-20210813204216-13784 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:51:41.490377  278291 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:51:41.490447  278291 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:41.490451  278291 out.go:311] Setting ErrFile to fd 2...
	I0813 20:51:41.490454  278291 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:41.490559  278291 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:51:41.490760  278291 out.go:305] Setting JSON to false
	I0813 20:51:41.490777  278291 mustload.go:65] Loading cluster: no-preload-20210813204216-13784
	I0813 20:51:41.491106  278291 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:41.491557  278291 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:41.540772  278291 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:41.541639  278291 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-20210813204216-13784 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:51:41.543909  278291 out.go:177] * Pausing node no-preload-20210813204216-13784 ... 
	I0813 20:51:41.543945  278291 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:41.544224  278291 ssh_runner.go:149] Run: systemctl --version
	I0813 20:51:41.544273  278291 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:41.599889  278291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:41.701182  278291 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:41.710636  278291 pause.go:50] kubelet running: true
	I0813 20:51:41.710694  278291 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:51:41.862281  278291 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:51:41.862376  278291 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:51:41.930918  278291 cri.go:76] found id: "78940ce7ea25ea4289edad9d81a0e04be6b4b00989fd85859af5794c53d789f6"
	I0813 20:51:41.930954  278291 cri.go:76] found id: "da11f623096a44837f25959d7877160cea42f70c3f21e592a2fee58e6911bedd"
	I0813 20:51:41.930961  278291 cri.go:76] found id: "504ce37aeae71f22a0f050a155ae8bec691ed8ce5e35f0c0518d3237d9826f88"
	I0813 20:51:41.930967  278291 cri.go:76] found id: "9d61a3e5c94c90b74a98eb9e43ec6b96d9929d13e2bfc65bd3805f912a57a9e8"
	I0813 20:51:41.930972  278291 cri.go:76] found id: "13346f79e0b1a0b1a696ae046ce04970accd7893fac6317c3f322cdb16028bd0"
	I0813 20:51:41.930977  278291 cri.go:76] found id: "3575fcd4de6e5072e4942339287b773ba8c67eda42193f2a9553e51c1bb336cd"
	I0813 20:51:41.930980  278291 cri.go:76] found id: "b9a8b46e0a44906e80af7b7fe48165ba7a021fdc512db2d3a043f636e20feb0e"
	I0813 20:51:41.930984  278291 cri.go:76] found id: "2ed1a101fdc2de06b7ccca482af6efe456ca1dfd0df0793a7496d57e53a5d09a"
	I0813 20:51:41.930987  278291 cri.go:76] found id: "aef6ebbbe8620042fc604d68961705dd4b2b333af7aff651c0c15d0a72d455fe"
	I0813 20:51:41.930995  278291 cri.go:76] found id: "acd231be03d703151899011383876b333c8c8a75608cec92be237de868e25638"
	I0813 20:51:41.930999  278291 cri.go:76] found id: "ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c"
	I0813 20:51:41.931003  278291 cri.go:76] found id: "63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	I0813 20:51:41.931007  278291 cri.go:76] found id: ""
	I0813 20:51:41.931044  278291 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p no-preload-20210813204216-13784 --alsologtostderr -v=1 failed: exit status 80
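The trace above records how far the pause flow got before exit status 80: it confirmed the kubelet was active, disabled it, listed CRI containers for the kube-system, kubernetes-dashboard, storage-gluster and istio-operator namespaces, and then produced no further output after "sudo runc list -f json". A sketch of that container-enumeration step, with the command strings copied from the cri.go:41 lines above (run locally here for brevity; in the test it executes inside the node over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers rebuilds the crictl pipeline from the cri.go:41 step: one
// "crictl ps -a --quiet" per namespace label, joined with ";" and executed
// under "sudo -s eval", yielding the container IDs that pause would freeze.
func listCRIContainers(namespaces []string) ([]string, error) {
	cmds := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		cmds = append(cmds, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
	}
	out, err := exec.Command("sudo", "-s", "eval", strings.Join(cmds, "; ")).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listCRIContainers([]string{
		"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator",
	})
	fmt.Println(len(ids), "containers:", ids, err)
}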
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210813204216-13784
helpers_test.go:236: (dbg) docker inspect no-preload-20210813204216-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c",
	        "Created": "2021-08-13T20:42:19.249594641Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233512,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:45:25.060722245Z",
	            "FinishedAt": "2021-08-13T20:45:22.690105537Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/hosts",
	        "LogPath": "/var/lib/docker/containers/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c-json.log",
	        "Name": "/no-preload-20210813204216-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210813204216-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210813204216-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/416ed91688eef7fd372daf2da68950783a07dfc25b0bfb3dce4d54b5cac624e6-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/416ed91688eef7fd372daf2da68950783a07dfc25b0bfb3dce4d54b5cac624e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/416ed91688eef7fd372daf2da68950783a07dfc25b0bfb3dce4d54b5cac624e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/416ed91688eef7fd372daf2da68950783a07dfc25b0bfb3dce4d54b5cac624e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210813204216-13784",
	                "Source": "/var/lib/docker/volumes/no-preload-20210813204216-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210813204216-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210813204216-13784",
	                "name.minikube.sigs.k8s.io": "no-preload-20210813204216-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a10346a8dda94d1eb351409cac8253f614953044c4be23fa2f3e796c6cec4a58",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32941"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32943"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32942"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a10346a8dda9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210813204216-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "472ef6c90d7b"
	                    ],
	                    "NetworkID": "e58530d1cbfdb6ce18e1d2e9fb761572954ee4ce5a9dfaf840d323eece84d305",
	                    "EndpointID": "7b186cffb514a2ef1a4e845fc6982cf4750375f257599b88cab47f93f5916ab7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
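The Ports section of the inspect output above is what the earlier cli_runner step parses: 22/tcp is published at 127.0.0.1:32945, which matches the port the ssh client then dials (sshutil.go:53 in the trace). A sketch of that lookup, using the same Go template that appears in the trace:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the published host port for a container port the same way
// the cli_runner line in the trace does: a Go template handed to docker
// inspect. For this container the inspect JSON shows 22/tcp at 127.0.0.1:32945.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	p, err := hostPort("no-preload-20210813204216-13784", "22/tcp")
	fmt.Println(p, err) // prints 32945 while the container above is running
}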
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204216-13784 -n no-preload-20210813204216-13784

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204216-13784 -n no-preload-20210813204216-13784: exit status 2 (17.333454063s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:52:00.881165  278745 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
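The "(may be ok)" status failure above comes from the apiserver's /healthz returning 500 with only the etcd check failing. A sketch of probing that endpoint directly; it assumes, as kubeadm-style clusters normally allow via the system:public-info-viewer binding, that /healthz is readable without credentials, and it skips TLS verification because the cluster serves minikube's self-signed CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip certificate verification: the endpoint serves minikube's
	// self-signed CA rather than a publicly trusted one.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	// Same endpoint the status check hit; "?verbose" requests the per-check
	// list that appears in the stderr block ("[-]etcd failed", and so on).
	resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}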
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210813204216-13784 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p no-preload-20210813204216-13784 logs -n 25: exit status 110 (15.242276969s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:45:49 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:42:15 UTC | Fri, 13 Aug 2021 20:47:32 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:44 UTC | Fri, 13 Aug 2021 20:47:45 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:19 UTC | Fri, 13 Aug 2021 20:49:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:21 UTC | Fri, 13 Aug 2021 20:49:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:50:24 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:24 UTC | Fri, 13 Aug 2021 20:50:25 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:25 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:50:50 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:00 UTC | Fri, 13 Aug 2021 20:51:00 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:03 UTC | Fri, 13 Aug 2021 20:51:03 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:05 UTC | Fri, 13 Aug 2021 20:51:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:06 UTC | Fri, 13 Aug 2021 20:51:10 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:11 UTC | Fri, 13 Aug 2021 20:51:11 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:51:12 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:13 UTC | Fri, 13 Aug 2021 20:51:13 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:51:22 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:41 UTC | Fri, 13 Aug 2021 20:51:41 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:51:11
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:51:11.626877  271328 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:51:11.627052  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627060  271328 out.go:311] Setting ErrFile to fd 2...
	I0813 20:51:11.627064  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627159  271328 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:51:11.627409  271328 out.go:305] Setting JSON to false
	I0813 20:51:11.666661  271328 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5634,"bootTime":1628882237,"procs":328,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:51:11.666785  271328 start.go:121] virtualization: kvm guest
	I0813 20:51:11.669469  271328 out.go:177] * [auto-20210813204009-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:51:11.669645  271328 notify.go:169] Checking for updates...
	I0813 20:51:11.671319  271328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:11.672833  271328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:51:11.674351  271328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:51:11.675913  271328 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:51:11.676594  271328 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:11.676833  271328 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.676967  271328 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.677023  271328 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:51:11.731497  271328 docker.go:132] docker version: linux-19.03.15
	I0813 20:51:11.731582  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.824730  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.775305956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.824827  271328 docker.go:244] overlay module found
	I0813 20:51:11.826307  271328 out.go:177] * Using the docker driver based on user configuration
	I0813 20:51:11.826332  271328 start.go:278] selected driver: docker
	I0813 20:51:11.826337  271328 start.go:751] validating driver "docker" against <nil>
	I0813 20:51:11.826355  271328 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:51:11.826409  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:51:11.826435  271328 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:51:11.827724  271328 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:51:11.828584  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.921127  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.870452453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.921281  271328 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:51:11.921463  271328 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:51:11.921497  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:11.921506  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:11.921514  271328 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:51:11.921523  271328 start_flags.go:277] config:
	{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:11.924012  271328 out.go:177] * Starting control plane node auto-20210813204009-13784 in cluster auto-20210813204009-13784
	I0813 20:51:11.924056  271328 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:51:11.925270  271328 out.go:177] * Pulling base image ...
	I0813 20:51:11.925296  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:11.925327  271328 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:51:11.925325  271328 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:51:11.925373  271328 cache.go:56] Caching tarball of preloaded images
	I0813 20:51:11.925616  271328 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:51:11.925640  271328 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:51:11.925773  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:11.925807  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json: {Name:mk3876305492e8ad5450e3976660c9fa1c973e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
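
The lock.go:36 line above shows config.json being written behind a named lock with a 500ms retry delay and a 1m0s timeout. As a rough illustration of that retry-until-timeout pattern only (not minikube's actual lock implementation; the helper name is hypothetical), a lock-file version in Go might look like:

    // writeFileLocked retries acquiring <path>.lock every delay until
    // timeout elapses, then writes data and releases the lock file.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation atomic: exactly one writer wins.
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                defer os.Remove(lock)
                break
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring " + lock)
            }
            time.Sleep(delay)
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        if err := writeFileLocked("config.json", []byte("{}\n"), 500*time.Millisecond, time.Minute); err != nil {
            fmt.Println(err)
        }
    }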
	I0813 20:51:12.029343  271328 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:51:12.029375  271328 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:51:12.029391  271328 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:51:12.029434  271328 start.go:313] acquiring machines lock for auto-20210813204009-13784: {Name:mkd0aba803bc7694302f970fb956ac46569643dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:51:12.029622  271328 start.go:317] acquired machines lock for "auto-20210813204009-13784" in 163.616µs
	I0813 20:51:12.029653  271328 start.go:89] Provisioning new machine with config: &{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:12.029748  271328 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:51:11.473988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:11.474018  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:11.573472  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:11.573526  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:11.658988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.659019  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:11.685635  264876 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.988027  264876 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813204926-13784"
	I0813 20:51:12.521134  264876 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:12.521160  264876 addons.go:344] enableAddons completed in 2.029586792s
	I0813 20:51:12.583342  264876 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:12.585304  264876 out.go:177] 
	W0813 20:51:12.585562  264876 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:12.587605  264876 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:12.589196  264876 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813204926-13784" cluster and "default" namespace by default
	I0813 20:51:08.546768  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.046384  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.546599  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.046701  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.546641  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.046329  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.546622  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.046214  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.546737  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.666694  233224 kubeadm.go:985] duration metric: took 12.281927379s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:12.666726  233224 kubeadm.go:392] StartCluster complete in 5m41.350158589s
	I0813 20:51:12.666746  233224 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:12.666841  233224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:12.669323  233224 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:13.198236  233224 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813204216-13784" rescaled to 1
	I0813 20:51:13.198297  233224 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:51:13.198331  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:13.200510  233224 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:13.198427  233224 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:13.200649  233224 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200666  233224 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200671  233224 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:13.200686  233224 addons.go:59] Setting dashboard=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.198561  233224 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:13.200707  233224 addons.go:135] Setting addon dashboard=true in "no-preload-20210813204216-13784"
	I0813 20:51:13.200710  233224 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813204216-13784"
	W0813 20:51:13.200714  233224 addons.go:147] addon dashboard should already be in state true
	I0813 20:51:13.200722  233224 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200733  233224 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:13.200743  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200748  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200588  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:13.200700  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200713  233224 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200905  233224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813204216-13784"
	I0813 20:51:13.201200  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201286  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201320  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201369  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.268820  233224 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.268850  233224 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:13.268885  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.269529  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.272105  233224 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.272280  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:13.276915  233224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:13.275633  233224 node_ready.go:49] node "no-preload-20210813204216-13784" has status "Ready":"True"
	I0813 20:51:13.277012  233224 node_ready.go:38] duration metric: took 4.87652ms waiting for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.277035  233224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:13.277050  233224 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.277062  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:13.277114  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.280067  233224 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.282273  233224 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:13.282349  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:13.282360  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:13.282428  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.288178  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:13.302483  233224 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.302581  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:13.302600  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:13.302672  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.364847  233224 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.364873  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:13.364933  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.394311  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.422036  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.432725  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.457704  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.517628  233224 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
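
For reference, the sed program in the ssh_runner command above splices a hosts block into the CoreDNS Corefile immediately before its forward directive. The resulting fragment (reconstructed from the sed script itself, not captured from the cluster) reads:

            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf

Any pod that resolves host.minikube.internal then receives the host gateway address 192.168.49.1, while every other name falls through to the forward plugin as before.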
	I0813 20:51:13.528393  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.620168  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:13.620195  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:13.671071  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.681321  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:13.681356  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:13.689240  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:13.689265  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:13.774865  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:13.774905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:13.862937  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:13.862968  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:13.866582  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:13.866605  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:13.965927  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:13.965951  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:13.986024  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:14.070287  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:14.070319  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:14.189473  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:14.189565  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:14.364541  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:14.364569  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:14.492877  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:14.492905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:14.596170  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:14.596202  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:14.663166  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.134726824s)
	I0813 20:51:14.669029  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:15.296512  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.310389487s)
	I0813 20:51:15.296557  233224 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813204216-13784"
	I0813 20:51:15.375159  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.190525  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.521448806s)
	I0813 20:51:12.032028  271328 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:51:12.032292  271328 start.go:160] libmachine.API.Create for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:12.032325  271328 client.go:168] LocalClient.Create starting
	I0813 20:51:12.032388  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:51:12.032418  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032440  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032571  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:51:12.032593  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032613  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032954  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:51:12.084329  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:51:12.084421  271328 network_create.go:255] running [docker network inspect auto-20210813204009-13784] to gather additional debugging logs...
	I0813 20:51:12.084441  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784
	W0813 20:51:12.129703  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 returned with exit code 1
	I0813 20:51:12.129740  271328 network_create.go:258] error running [docker network inspect auto-20210813204009-13784]: docker network inspect auto-20210813204009-13784: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210813204009-13784
	I0813 20:51:12.129756  271328 network_create.go:260] output of [docker network inspect auto-20210813204009-13784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210813204009-13784
	
	** /stderr **
	I0813 20:51:12.129811  271328 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:12.181560  271328 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-e58530d1cbfd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c3:d4:16:b0}}
	I0813 20:51:12.182554  271328 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0003f6078] misses:0}
	I0813 20:51:12.182616  271328 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:51:12.182634  271328 network_create.go:106] attempt to create docker network auto-20210813204009-13784 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:51:12.182698  271328 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204009-13784
	I0813 20:51:12.265555  271328 network_create.go:90] docker network auto-20210813204009-13784 192.168.58.0/24 created
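
The network.go lines above show the subnet picker skipping 192.168.49.0/24 (already claimed by the existing br-e58530d1cbfd bridge) and settling on 192.168.58.0/24. A minimal sketch of such a probe, checking candidate CIDRs against the host's interface addresses; the candidate list and the step of 9 are inferred from the 49 -> 58 jump in the log and are assumptions, not minikube's code:

    // subnetpick: find the first private /24 no local interface occupies.
    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether any local interface address falls inside cidr.
    func taken(cidr string) bool {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return true
        }
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && ipnet.Contains(ipn.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        for third := 49; third < 255; third += 9 { // assumed stride
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken(cidr) {
                fmt.Println("using free private subnet", cidr)
                return
            }
            fmt.Println("skipping subnet", cidr, "that is taken")
        }
    }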
	I0813 20:51:12.265592  271328 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20210813204009-13784" container
	I0813 20:51:12.265659  271328 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:51:12.325195  271328 cli_runner.go:115] Run: docker volume create auto-20210813204009-13784 --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:51:12.375214  271328 oci.go:102] Successfully created a docker volume auto-20210813204009-13784
	I0813 20:51:12.375313  271328 cli_runner.go:115] Run: docker run --rm --name auto-20210813204009-13784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --entrypoint /usr/bin/test -v auto-20210813204009-13784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:51:13.255475  271328 oci.go:106] Successfully prepared a docker volume auto-20210813204009-13784
	W0813 20:51:13.255535  271328 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:51:13.255544  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:51:13.255605  271328 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:51:13.255907  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:13.255936  271328 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:51:13.256015  271328 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:51:13.443619  271328 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210813204009-13784 --name auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210813204009-13784 --network auto-20210813204009-13784 --ip 192.168.58.2 --volume auto-20210813204009-13784:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:51:14.118301  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Running}}
	I0813 20:51:14.185140  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:14.236626  271328 cli_runner.go:115] Run: docker exec auto-20210813204009-13784 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:51:14.394377  271328 oci.go:278] the created container "auto-20210813204009-13784" has a running status.
	I0813 20:51:14.394412  271328 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa...
	I0813 20:51:14.559698  271328 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:51:14.962022  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:15.017995  271328 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:51:15.018017  271328 kic_runner.go:115] Args: [docker exec --privileged auto-20210813204009-13784 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:51:16.192846  233224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:16.192890  233224 addons.go:344] enableAddons completed in 2.994475177s
	I0813 20:51:17.804083  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.801657  271328 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.545595555s)
	I0813 20:51:17.801693  271328 kic.go:188] duration metric: took 4.545754 seconds to extract preloaded images to volume
	I0813 20:51:17.801770  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:17.842060  271328 machine.go:88] provisioning docker machine ...
	I0813 20:51:17.842103  271328 ubuntu.go:169] provisioning hostname "auto-20210813204009-13784"
	I0813 20:51:17.842167  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:17.880732  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:17.880934  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:17.880952  271328 main.go:130] libmachine: About to run SSH command:
	sudo hostname auto-20210813204009-13784 && echo "auto-20210813204009-13784" | sudo tee /etc/hostname
	I0813 20:51:18.049279  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: auto-20210813204009-13784
	
	I0813 20:51:18.049355  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.089070  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.089215  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.089233  271328 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210813204009-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210813204009-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210813204009-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:51:18.214361  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:51:18.214400  271328 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:51:18.214423  271328 ubuntu.go:177] setting up certificates
	I0813 20:51:18.214435  271328 provision.go:83] configureAuth start
	I0813 20:51:18.214499  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:18.257160  271328 provision.go:138] copyHostCerts
	I0813 20:51:18.257225  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:51:18.257232  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:51:18.257274  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:51:18.257345  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:51:18.257355  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:51:18.257373  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:51:18.257422  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:51:18.257430  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:51:18.257445  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:51:18.257520  271328 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.auto-20210813204009-13784 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210813204009-13784]
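
The provision.go:112 line issues a server certificate whose SAN list covers the container IP, loopback, and the machine names. A self-contained Go sketch of producing such a cert with crypto/x509 follows; it generates a throwaway CA instead of reading .minikube/certs and elides error handling, so every name in it is illustrative rather than minikube's actual code path:

    // servercert: sign a server cert with the SAN list seen in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (errors ignored for brevity in this sketch).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.auto-20210813204009-13784"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            // SANs matching the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "auto-20210813204009-13784"},
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }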
	I0813 20:51:18.405685  271328 provision.go:172] copyRemoteCerts
	I0813 20:51:18.405745  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:51:18.405785  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.445891  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:18.536412  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:51:18.553289  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0813 20:51:18.568793  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:51:18.583774  271328 provision.go:86] duration metric: configureAuth took 369.326679ms
	I0813 20:51:18.583798  271328 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:51:18.583946  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:18.584072  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.627524  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.627677  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.627697  271328 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:51:19.012135  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:51:19.012167  271328 machine.go:91] provisioned docker machine in 1.170081385s
	I0813 20:51:19.012178  271328 client.go:171] LocalClient.Create took 6.979844019s
	I0813 20:51:19.012195  271328 start.go:168] duration metric: libmachine.API.Create for "auto-20210813204009-13784" took 6.979905282s
	I0813 20:51:19.012204  271328 start.go:267] post-start starting for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:19.012215  271328 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:51:19.012274  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:51:19.012321  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.051463  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.148765  271328 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:51:19.151322  271328 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:51:19.151341  271328 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:51:19.151349  271328 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:51:19.151355  271328 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:51:19.151364  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:51:19.151409  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:51:19.151507  271328 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:51:19.151607  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:51:19.158200  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:19.176073  271328 start.go:270] post-start completed in 163.849198ms
	I0813 20:51:19.176519  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.224022  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:19.224268  271328 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:51:19.224328  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.265461  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.357703  271328 start.go:129] duration metric: createHost completed in 7.327939716s
	I0813 20:51:19.357731  271328 start.go:80] releasing machines lock for "auto-20210813204009-13784", held for 7.328093299s
	I0813 20:51:19.357829  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.403591  271328 ssh_runner.go:149] Run: systemctl --version
	I0813 20:51:19.403631  271328 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:51:19.403663  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.403725  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.454924  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.455089  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.690299  271328 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:51:19.711263  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:51:19.720449  271328 docker.go:153] disabling docker service ...
	I0813 20:51:19.720510  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:51:19.729566  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:51:19.738541  271328 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:51:19.809055  271328 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:51:19.878138  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:51:19.887210  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:51:19.901071  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:51:19.909825  271328 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:51:19.909855  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 20:51:19.918547  271328 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:51:19.925341  271328 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:51:19.925401  271328 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:51:19.932883  271328 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
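
The sequence above is a probe-and-fallback: the sysctl read fails with status 255 because br_netfilter is not yet loaded, so the module is loaded via modprobe and IPv4 forwarding is enabled by writing to /proc directly. A minimal root-only Go sketch of the same fallback (the /proc paths are the ones the log shows; the overall structure is an assumption, not minikube's code):

    // netfilter: load br_netfilter if its sysctl is absent, then enable forwarding.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const sysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(sysctl); err != nil {
            // Missing sysctl "might be okay": try loading the module, as the log does.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Println("modprobe br_netfilter:", err, string(out))
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Println("enable ip_forward:", err)
        }
    }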
	I0813 20:51:19.939083  271328 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:51:20.008572  271328 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:51:20.019341  271328 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:51:20.019407  271328 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:51:20.022897  271328 start.go:413] Will wait 60s for crictl version
	I0813 20:51:20.022952  271328 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:51:20.049207  271328 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:51:20.049276  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.118062  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.185186  271328 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:51:20.185268  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:20.231193  271328 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:51:20.234527  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
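
The bash one-liner above makes the /etc/hosts edit idempotent: strip any stale host.minikube.internal record, append the fresh one, and copy the result back into place. The same logic rendered in Go, as a sketch (it must run as root, and it drops blank lines for brevity):

    // hostsinject: idempotently (re)write the host.minikube.internal record.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const record = "192.168.58.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Keep every line except old copies of the record (and blanks).
            if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, record, "")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
            panic(err)
        }
    }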
	I0813 20:51:20.243481  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:20.243537  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.298894  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.298920  271328 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:51:20.298967  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.326049  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.326070  271328 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:51:20.326138  271328 ssh_runner.go:149] Run: crio config
	I0813 20:51:20.405222  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:20.405254  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:20.405269  271328 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:51:20.405286  271328 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210813204009-13784 NodeName:auto-20210813204009-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:51:20.405450  271328 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "auto-20210813204009-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
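The generated kubeadm config above can be sanity-checked before the real init is attempted; a minimal sketch, assuming the file has already been copied to /var/tmp/minikube/kubeadm.yaml as in the scp step further down:

	# render everything kubeadm would do, without touching the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run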
	
	I0813 20:51:20.406210  271328 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=auto-20210813204009-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
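The [Service] drop-in above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines that follow). A quick way to confirm systemd has picked it up, assuming a systemd host:

	systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload   # re-read unit files after a drop-in changes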
	I0813 20:51:20.406291  271328 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:51:20.414073  271328 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:51:20.414143  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:51:20.420611  271328 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (556 bytes)
	I0813 20:51:20.432233  271328 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:51:20.443622  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
	I0813 20:51:20.454650  271328 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:51:20.457221  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:51:20.467941  271328 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784 for IP: 192.168.58.2
	I0813 20:51:20.467993  271328 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:51:20.468013  271328 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:51:20.468073  271328 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key
	I0813 20:51:20.468084  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt with IP's: []
	I0813 20:51:20.834054  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt ...
	I0813 20:51:20.834092  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: {Name:mk7fec601fb1fafe5c23646db0e11a54596e8bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834267  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key ...
	I0813 20:51:20.834281  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key: {Name:mk1cae1776891d9f945556a388916d00049fb0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834361  271328 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041
	I0813 20:51:20.834373  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:51:21.063423  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 ...
	I0813 20:51:21.063459  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041: {Name:mk251c4f0d507b09ef6d31c1707428420ec85197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065611  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 ...
	I0813 20:51:21.065633  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041: {Name:mk4d38dae507bc9d1c850061ba3bdb1c6e2ca7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065723  271328 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt
	I0813 20:51:21.065806  271328 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key
	I0813 20:51:21.065871  271328 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key
	I0813 20:51:21.065883  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt with IP's: []
	I0813 20:51:21.152453  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt ...
	I0813 20:51:21.152481  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt: {Name:mke5a626b5b050e50bb47e400c3bba4f5fb88778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152637  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key ...
	I0813 20:51:21.152650  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key: {Name:mkb2a71eb086a15771297e8ab11e852569412fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
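Each crypto.go step above mints a key/cert pair under the profile directory. To eyeball what was generated, openssl can dump the subject, issuer and validity window; a sketch, assuming a default .minikube layout (the Jenkins integration path above is abbreviated here):

	# PROFILE stands in for the profile directory created above
	PROFILE="$HOME/.minikube/profiles/auto-20210813204009-13784"
	openssl x509 -noout -subject -issuer -dates -in "$PROFILE/client.crt"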
	I0813 20:51:21.152807  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:51:21.152843  271328 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:51:21.152855  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:51:21.152880  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:51:21.152909  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:51:21.152931  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:51:21.152971  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:21.153904  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:51:21.171484  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:51:21.187960  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:51:21.205911  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:51:21.223614  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:51:21.239905  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:51:21.255368  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:51:21.271028  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:51:21.286769  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:51:21.302428  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:51:21.317590  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:51:21.336580  271328 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:51:21.355880  271328 ssh_runner.go:149] Run: openssl version
	I0813 20:51:21.361210  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:51:21.368318  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371245  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371283  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.376426  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:51:21.384634  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:51:21.392048  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395072  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395113  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.400410  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:51:21.408727  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:51:21.415718  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418881  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418923  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.423802  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
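The test -L / ln -fs dance above implements OpenSSL's hashed-symlink lookup: a CA in /etc/ssl/certs is found by the hash of its subject, not by filename. The same linking done by hand, for the minikubeCA example above:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"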
	I0813 20:51:21.431770  271328 kubeadm.go:390] StartCluster: {Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:21.431861  271328 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:51:21.431914  271328 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:21.455876  271328 cri.go:76] found id: ""
	I0813 20:51:21.455927  271328 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:51:21.463196  271328 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:21.471334  271328 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:21.471384  271328 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:21.478565  271328 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:21.478610  271328 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
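While that kubeadm init runs, the usual way to watch progress from another shell is the kubelet journal and the static-pod manifests it launches; a sketch, assuming systemd and the paths used above:

	sudo journalctl -u kubelet -f    # kubelet bring-up, image pulls, static pod starts
	ls /etc/kubernetes/manifests     # etcd/apiserver/controller-manager/scheduler manifests appear here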
	I0813 20:51:18.862764  233224 pod_ready.go:92] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.862797  233224 pod_ready.go:81] duration metric: took 5.574582513s waiting for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.862817  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867642  233224 pod_ready.go:92] pod "coredns-78fcd69978-kbf57" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.867658  233224 pod_ready.go:81] duration metric: took 4.833167ms waiting for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867668  233224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:20.879817  233224 pod_ready.go:102] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:21.378531  233224 pod_ready.go:92] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.378554  233224 pod_ready.go:81] duration metric: took 2.510878118s waiting for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.378572  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382866  233224 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.382882  233224 pod_ready.go:81] duration metric: took 4.296091ms waiting for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382892  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386782  233224 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.386801  233224 pod_ready.go:81] duration metric: took 3.90189ms waiting for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386813  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390480  233224 pod_ready.go:92] pod "kube-proxy-vf22v" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.390494  233224 pod_ready.go:81] duration metric: took 3.672888ms waiting for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390501  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604404  233224 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.604433  233224 pod_ready.go:81] duration metric: took 213.923321ms waiting for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604445  233224 pod_ready.go:38] duration metric: took 8.327391702s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:21.604469  233224 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:51:21.604523  233224 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:21.685434  233224 api_server.go:70] duration metric: took 8.487094951s to wait for apiserver process to appear ...
	I0813 20:51:21.685459  233224 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:51:21.685471  233224 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:51:21.691084  233224 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
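The healthz probe above is an anonymous GET; current Kubernetes releases allow unauthenticated access to /healthz via the system:public-info-viewer role, so the same check works from any shell (address taken from the line above; -k skips TLS verification of the minikube CA):

	curl -sk https://192.168.49.2:8443/healthz   # prints "ok" when the apiserver is healthy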
	I0813 20:51:21.691907  233224 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:51:21.691929  233224 api_server.go:129] duration metric: took 6.463677ms to wait for apiserver health ...
	I0813 20:51:21.691939  233224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:51:21.806833  233224 system_pods.go:59] 10 kube-system pods found
	I0813 20:51:21.806865  233224 system_pods.go:61] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:21.806872  233224 system_pods.go:61] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:21.806878  233224 system_pods.go:61] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:21.806884  233224 system_pods.go:61] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:21.806890  233224 system_pods.go:61] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:21.806897  233224 system_pods.go:61] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:21.806903  233224 system_pods.go:61] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:21.806909  233224 system_pods.go:61] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:21.806921  233224 system_pods.go:61] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:21.806947  233224 system_pods.go:61] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:21.806955  233224 system_pods.go:74] duration metric: took 115.009603ms to wait for pod list to return data ...
	I0813 20:51:21.806968  233224 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:51:22.003355  233224 default_sa.go:45] found service account: "default"
	I0813 20:51:22.003384  233224 default_sa.go:55] duration metric: took 196.403211ms for default service account to be created ...
	I0813 20:51:22.003397  233224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:51:22.206326  233224 system_pods.go:86] 10 kube-system pods found
	I0813 20:51:22.206359  233224 system_pods.go:89] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:22.206368  233224 system_pods.go:89] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:22.206376  233224 system_pods.go:89] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:22.206382  233224 system_pods.go:89] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:22.206390  233224 system_pods.go:89] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:22.206398  233224 system_pods.go:89] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:22.206407  233224 system_pods.go:89] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:22.206414  233224 system_pods.go:89] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:22.206428  233224 system_pods.go:89] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:22.206438  233224 system_pods.go:89] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:22.206451  233224 system_pods.go:126] duration metric: took 203.046705ms to wait for k8s-apps to be running ...
	I0813 20:51:22.206463  233224 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:51:22.206511  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:22.263444  233224 system_svc.go:56] duration metric: took 56.96766ms WaitForService to wait for kubelet.
	I0813 20:51:22.263482  233224 kubeadm.go:547] duration metric: took 9.065148102s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:51:22.263519  233224 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:51:22.403039  233224 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:51:22.403065  233224 node_conditions.go:123] node cpu capacity is 8
	I0813 20:51:22.403081  233224 node_conditions.go:105] duration metric: took 139.554694ms to run NodePressure ...
	I0813 20:51:22.403096  233224 start.go:231] waiting for startup goroutines ...
	I0813 20:51:22.450275  233224 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:22.455408  233224 out.go:177] 
	W0813 20:51:22.455568  233224 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:22.462541  233224 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:22.464230  233224 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813204216-13784" cluster and "default" namespace by default
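The skew warning above compares the host kubectl against the cluster version. To see both sides at once, or to bypass the host binary entirely with the version-matched kubectl that minikube ships:

	kubectl version --short                 # client vs. server version
	minikube kubectl -- version --short     # same, using the kubectl matching the cluster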
	I0813 20:51:21.794120  271328 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:25.163675  271328 out.go:204]   - Booting up control plane ...
	I0813 20:51:28.722579  240241 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.346300289s)
	I0813 20:51:28.722667  240241 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:28.732254  240241 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:28.732318  240241 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:28.757337  240241 cri.go:76] found id: ""
	I0813 20:51:28.757392  240241 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:28.764551  240241 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:28.764599  240241 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:28.771196  240241 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:28.771247  240241 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:29.067432  240241 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:29.947085  240241 out.go:204]   - Booting up control plane ...
	I0813 20:51:40.720555  271328 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:41.136233  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:41.136257  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:41.138470  271328 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:41.138531  271328 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:41.142093  271328 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:41.142114  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:41.159919  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
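After the CNI manifest is applied, kindnet rolls out as a DaemonSet in kube-system; the DaemonSet name and label below are taken from minikube's bundled kindnet manifest and should be treated as assumptions. A sketch for verifying the rollout:

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
	kubectl -n kube-system get pods -l app=kindnet -o wide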
	I0813 20:51:43.999786  240241 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:44.412673  240241 cni.go:93] Creating CNI manager for ""
	I0813 20:51:44.412698  240241 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:44.414497  240241 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:44.414556  240241 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:44.418236  240241 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:44.418253  240241 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:44.430863  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:51:41.568473  271328 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:41.568595  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:41.568620  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=auto-20210813204009-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:41.684391  271328 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:41.684482  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:42.252918  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:42.753184  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:43.253340  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:43.752498  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.252543  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.752811  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.253371  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.753399  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.252813  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.663289  240241 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:44.663354  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813204407-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_44_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.663359  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.785476  240241 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:44.785625  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.361034  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.860496  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.360813  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.861457  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.360900  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.860847  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.361284  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.860717  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.361233  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.753324  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.622147  271328 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.868786003s)
	I0813 20:51:48.753354  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.860593  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.861309  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.361330  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.860839  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.360530  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.261881  271328 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.5084884s)
	I0813 20:51:52.752569  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.253464  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.753088  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.252748  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.752605  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.253338  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.752990  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.253395  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.860519  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.360704  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.861401  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.360874  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.861184  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.935142  240241 kubeadm.go:985] duration metric: took 12.271847359s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:56.935173  240241 kubeadm.go:392] StartCluster complete in 5m59.56574911s
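The long run of "kubectl get sa default" lines above is elevateKubeSystemPrivileges polling until the default service account exists before proceeding. A minimal equivalent of that wait loop:

	# block until the default ServiceAccount shows up (0.5s poll, mirroring the cadence above)
	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done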
	I0813 20:51:56.935192  240241 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:56.935280  240241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:56.936618  240241 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.471369  240241 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813204407-13784" rescaled to 1
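The kapi.go line above reports minikube rescaling the coredns deployment to a single replica, avoiding a duplicate DNS pod on a one-node cluster; the equivalent by hand:

	kubectl -n kube-system scale deployment coredns --replicas=1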
	I0813 20:51:57.471434  240241 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:57.473147  240241 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:57.473200  240241 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:57.471473  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:57.471495  240241 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:57.473309  240241 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473332  240241 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473329  240241 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.473341  240241 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:57.473359  240241 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473373  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.471677  240241 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:57.473389  240241 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473397  240241 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473415  240241 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473418  240241 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.473375  240241 addons.go:147] addon dashboard should already be in state true
	W0813 20:51:57.473430  240241 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:57.473453  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.473469  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.473755  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473923  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473970  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473984  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.500075  240241 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813204407-13784" to be "Ready" ...
	I0813 20:51:57.508390  240241 node_ready.go:49] node "default-k8s-different-port-20210813204407-13784" has status "Ready":"True"
	I0813 20:51:57.508412  240241 node_ready.go:38] duration metric: took 8.303909ms waiting for node "default-k8s-different-port-20210813204407-13784" to be "Ready" ...
	I0813 20:51:57.508425  240241 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:57.530074  240241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:57.559993  240241 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:57.561443  240241 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:56.753159  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.252816  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.323178  271328 kubeadm.go:985] duration metric: took 15.754657804s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:57.323205  271328 kubeadm.go:392] StartCluster complete in 35.891441868s
	I0813 20:51:57.323233  271328 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.323334  271328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:57.325280  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.844496  271328 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20210813204009-13784" rescaled to 1
	I0813 20:51:57.844542  271328 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:57.847125  271328 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:57.847179  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:57.844600  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:57.844628  271328 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:51:57.844773  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:57.847273  271328 addons.go:59] Setting storage-provisioner=true in profile "auto-20210813204009-13784"
	I0813 20:51:57.847289  271328 addons.go:135] Setting addon storage-provisioner=true in "auto-20210813204009-13784"
	W0813 20:51:57.847298  271328 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:57.847304  271328 addons.go:59] Setting default-storageclass=true in profile "auto-20210813204009-13784"
	I0813 20:51:57.847325  271328 host.go:66] Checking if "auto-20210813204009-13784" exists ...
	I0813 20:51:57.847330  271328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20210813204009-13784"
	I0813 20:51:57.847657  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.847848  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.914584  271328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:57.914695  271328 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.914708  271328 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:57.914767  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:57.926636  271328 addons.go:135] Setting addon default-storageclass=true in "auto-20210813204009-13784"
	W0813 20:51:57.926670  271328 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:57.926704  271328 host.go:66] Checking if "auto-20210813204009-13784" exists ...
	I0813 20:51:57.927086  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.944440  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:57.946970  271328 node_ready.go:35] waiting up to 5m0s for node "auto-20210813204009-13784" to be "Ready" ...
	I0813 20:51:57.951330  271328 node_ready.go:49] node "auto-20210813204009-13784" has status "Ready":"True"
	I0813 20:51:57.951353  271328 node_ready.go:38] duration metric: took 4.355543ms waiting for node "auto-20210813204009-13784" to be "Ready" ...
	I0813 20:51:57.951367  271328 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:57.964918  271328 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:57.974587  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:57.995812  271328 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:57.995845  271328 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:57.995903  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:58.104226  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:58.127261  271328 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:58.207306  271328 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:58.318052  271328 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
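
The "host record injected into CoreDNS" entry above is the result of the sed pipeline logged a moment earlier: minikube reads the coredns ConfigMap, splices a "hosts" stanza in front of the "forward . /etc/resolv.conf" directive, and replaces the ConfigMap in place. A minimal sketch of the same edit, assuming kubectl is on the PATH and pointed at the cluster (192.168.58.1 is this node's gateway IP taken from the log; it differs per cluster/network):

    # Minimal sketch: inject a host record into the CoreDNS Corefile, mirroring the logged pipeline.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -

Once the ConfigMap is replaced, in-cluster lookups of host.minikube.internal resolve to the gateway IP without going to upstream DNS.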
	I0813 20:51:57.560121  240241 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.562962  240241 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:57.563043  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:57.563058  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:57.563087  240241 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:57.563122  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.563145  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:57.563156  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:57.563204  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.563285  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:57.563317  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.585350  240241 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.585389  240241 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:57.585423  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.586491  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.640285  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.643118  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.651320  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.655597  240241 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:57.655617  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:57.655661  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.659397  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:57.708263  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.772822  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:57.772851  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:57.775665  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:57.775686  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:57.778938  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.866896  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:57.866921  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:57.875909  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:57.875935  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:57.895465  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:57.895493  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:57.906579  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:57.906602  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:57.958953  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:57.977795  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:57.977819  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:57.988125  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:58.065141  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:58.065163  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:58.173899  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:58.173923  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:58.280880  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:58.280914  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:58.289511  240241 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0813 20:51:58.375994  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:58.376079  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:58.488006  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:58.488037  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:58.562447  240241 pod_ready.go:97] error getting pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-94bmz" not found
	I0813 20:51:58.562481  240241 pod_ready.go:81] duration metric: took 1.032368127s waiting for pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:58.562494  240241 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-94bmz" not found
	I0813 20:51:58.562502  240241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:58.578755  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:59.569598  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.79061998s)
	I0813 20:51:59.658034  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69903678s)
	I0813 20:51:59.658141  240241 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:59.658099  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.669942348s)
	I0813 20:52:00.558702  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.979881854s)
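
The addon flow visible throughout this stream is two steps per manifest: the YAML is copied from memory to /etc/kubernetes/addons/ on the node (the "scp memory" lines), then applied with the kubeconfig-pinned kubectl bundled under /var/lib/minikube/binaries (the "kubectl apply" lines). A sketch of the apply half, run inside the node, e.g. over minikube ssh, using paths taken from the log:

    # Apply staged addon manifests with the node-local kubectl, as the log records.
    KUBECTL=/var/lib/minikube/binaries/v1.21.3/kubectl
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml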
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:45:25 UTC, end at Fri 2021-08-13 20:52:01 UTC. --
	Aug 13 20:51:24 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:24.739059632Z" level=info msg="Starting container: 63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a" id=8dd4b597-4232-4569-9bbe-745bc92931c4 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:24 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:24.763362871Z" level=info msg="Started container 63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=8dd4b597-4232-4569-9bbe-745bc92931c4 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:25 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:25.469811075Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6\""
	Aug 13 20:51:25 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:25.593757274Z" level=info msg="Removing container: 32f8b2ec2a162f7e694c50fddf1c50cec97b930bc29222438342d431f0732906" id=ecc90c0d-c960-45d5-9603-d62c3c0c3b64 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:51:25 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:25.631090728Z" level=info msg="Removed container 32f8b2ec2a162f7e694c50fddf1c50cec97b930bc29222438342d431f0732906: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=ecc90c0d-c960-45d5-9603-d62c3c0c3b64 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:51:28 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:28.332879467Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=4f266af7-d513-47d5-9de6-c6b47fe39876 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:28 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:28.333108141Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=4f266af7-d513-47d5-9de6-c6b47fe39876 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.196039456Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f" id=9f046245-d391-450e-816e-350bc64fd5c3 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.196664039Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=8d3671c9-49cf-4b09-8ebe-86c7f9a71781 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.196920388Z" level=info msg="Checking image status: kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6" id=35d8449f-df6d-4bdc-9209-8b21a0577d28 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.197930087Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,RepoTags:[docker.io/kubernetesui/dashboard:v2.1.0],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 docker.io/kubernetesui/dashboard@sha256:8cd877c1c0909bdd50043edc18b89cfbbf0614a57893ebf59b6bd1ddb5419323],Size_:228529574,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=35d8449f-df6d-4bdc-9209-8b21a0577d28 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.198767461Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-n9kxj/kubernetes-dashboard" id=c0246d7c-33e2-4a85-97af-cfe87016af30 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.207808407Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.212544825Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3cff0b389158bb5aa0d351ee21ddb115f87b84094a2f45df1e170cfc8f9a5736/merged/etc/group: no such file or directory"
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.375229032Z" level=info msg="Created container ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-n9kxj/kubernetes-dashboard" id=c0246d7c-33e2-4a85-97af-cfe87016af30 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.375789685Z" level=info msg="Starting container: ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c" id=6ccaf67b-f304-4613-b03b-6e99d256e454 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.388381078Z" level=info msg="Started container ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-n9kxj/kubernetes-dashboard" id=6ccaf67b-f304-4613-b03b-6e99d256e454 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.333757365Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=689d0c7c-a487-4d7a-ae3e-a5c5f8a465b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.335582064Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=689d0c7c-a487-4d7a-ae3e-a5c5f8a465b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.336143561Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=ae6a6bc5-03f6-4e12-8704-bf24f37c6fd9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.337882718Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ae6a6bc5-03f6-4e12-8704-bf24f37c6fd9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.338606495Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=844e0fe8-7acb-4d79-8ea4-b4b4c2899d56 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.532932565Z" level=info msg="Created container acd231be03d703151899011383876b333c8c8a75608cec92be237de868e25638: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=844e0fe8-7acb-4d79-8ea4-b4b4c2899d56 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.533542803Z" level=info msg="Starting container: acd231be03d703151899011383876b333c8c8a75608cec92be237de868e25638" id=cc7bc5c6-0fea-4842-80ba-6a17793a2a44 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.562802999Z" level=info msg="Started container acd231be03d703151899011383876b333c8c8a75608cec92be237de868e25638: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=cc7bc5c6-0fea-4842-80ba-6a17793a2a44 name=/runtime.v1alpha2.RuntimeService/StartContainer
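
The metrics-server pull failures above are expected in this test: the deployment intentionally points at fake.domain/k8s.gcr.io/echoserver:1.4, which no DNS server resolves. The failure can be reproduced from inside the node with crictl, which talks to CRI-O directly (a sketch, assuming crictl is present in the node image):

    # Expected to fail with "no such host": the registry hostname is fake.
    sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4
    # The genuine image is already present under its real registry name.
    sudo crictl images | grep echoserver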
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID
	acd231be03d70       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   af7720ab8bfd7
	ee382c43816e2       docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f   31 seconds ago       Running             kubernetes-dashboard        0                   a9225630819e9
	63e63313a3f33       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           36 seconds ago       Exited              dashboard-metrics-scraper   1                   af7720ab8bfd7
	78940ce7ea25e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           45 seconds ago       Exited              storage-provisioner         0                   6b55c696964ea
	da11f623096a4       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                           46 seconds ago       Running             kindnet-cni                 0                   8335645b41e5e
	504ce37aeae71       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c                                           46 seconds ago       Running             kube-proxy                  0                   dae9070ce3b8f
	9d61a3e5c94c9       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           46 seconds ago       Exited              coredns                     0                   ccf0f7dbb417e
	13346f79e0b1a       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           46 seconds ago       Running             coredns                     0                   9014d958b77af
	3575fcd4de6e5       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                                           About a minute ago   Running             kube-controller-manager     2                   9360a0eb0cf50
	b9a8b46e0a449       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a                                           About a minute ago   Running             kube-apiserver              2                   f4d48caa6942d
	2ed1a101fdc2d       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba                                           About a minute ago   Running             etcd                        2                   65162c4da1625
	aef6ebbbe8620       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75                                           About a minute ago   Running             kube-scheduler              2                   7452130766e66
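
The table above matches what crictl reports for all containers, running and exited; a sketch of regenerating it inside the node, again assuming crictl is available:

    # List every container CRI-O knows about, including Exited attempts.
    sudo crictl ps -a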
	
	* 
	* ==> coredns [13346f79e0b1a0b1a696ae046ce04970accd7893fac6317c3f322cdb16028bd0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> coredns [9d61a3e5c94c90b74a98eb9e43ec6b96d9929d13e2bfc65bd3805f912a57a9e8] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +2.811832] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000019] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +0.204198] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +3.895437] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[ +12.031205] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000003] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +1.787836] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[ +14.060065] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth132654c8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d2 33 13 cb 90 7c 08 06        .......3...|..
	[  +0.492422] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0537654e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 56 dc 40 69 33 08 06        .......V.@i3..
	[Aug13 20:52] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth42b216bb
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 75 7c 88 de fd 08 06        .......u|.....
	[  +0.348033] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth3a91f4fd
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1e a0 d8 e2 a6 b4 08 06        ..............
	[  +7.435044] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +5.490524] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000025] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	
	* 
	* ==> etcd [2ed1a101fdc2de06b7ccca482af6efe456ca1dfd0df0793a7496d57e53a5d09a] <==
	* {"level":"info","ts":"2021-08-13T20:50:53.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-08-13T20:50:53.058Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-08-13T20:50:53.060Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-13T20:50:53.061Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-08-13T20:50:53.061Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-08-13T20:50:53.061Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-13T20:50:53.061Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:no-preload-20210813204216-13784 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T20:50:53.495Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-13T20:50:53.495Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:52:15 up  1:34,  0 users,  load average: 4.16, 2.84, 2.32
	Linux no-preload-20210813204216-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [b9a8b46e0a44906e80af7b7fe48165ba7a021fdc512db2d3a043f636e20feb0e] <==
	* W0813 20:52:12.680113       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:14.564310       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:14.564322       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.376972       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.380207       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.391682       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.391691       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.476125       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.476172       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.486520       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.591073       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.668917       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.668925       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.691556       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0813 20:52:15.692673       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	I0813 20:52:15.878414       1 trace.go:205] Trace[743731303]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (13-Aug-2021 20:52:01.572) (total time: 14305ms):
	Trace[743731303]: [14.305527385s] [14.305527385s] END
	I0813 20:52:15.878444       1 trace.go:205] Trace[557801478]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 20:51:45.963) (total time: 29915ms):
	Trace[557801478]: [29.915395746s] [29.915395746s] END
	E0813 20:52:15.878472       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc00d36ec00)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	E0813 20:52:15.878447       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc00d272300)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0813 20:52:15.878752       1 trace.go:205] Trace[1056420050]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:c2938e34-b9ff-472d-a81c-614faf645a4f,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 20:51:45.963) (total time: 29915ms):
	Trace[1056420050]: [29.915727927s] [29.915727927s] END
	I0813 20:52:15.879863       1 trace.go:205] Trace[2014768745]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:bedd2570-2d32-4299-858d-ef0cb20afb0d,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:52:01.572) (total time: 14307ms):
	Trace[2014768745]: [14.307011816s] [14.307011816s] END
	
	* 
	* ==> kube-controller-manager [3575fcd4de6e5072e4942339287b773ba8c67eda42193f2a9553e51c1bb336cd] <==
	* I0813 20:51:15.586337       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0813 20:51:15.591704       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:51:15.591753       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.596584       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:51:15.670856       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:51:15.671430       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.671482       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.676600       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.676670       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.683001       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:51:15.683114       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.683745       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:51:15.683782       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.758408       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:51:15.759866       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.760333       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:51:15.760411       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.765196       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.765327       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:51:15.875234       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-pgwss"
	I0813 20:51:15.875273       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-n9kxj"
	I0813 20:51:17.374789       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0813 20:51:23.610869       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0813 20:51:42.583813       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:51:43.042798       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
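
The FailedCreate burst above is a startup ordering race rather than a persistent fault: the ReplicaSet controller retries pod creation while the kubernetes-dashboard ServiceAccount is still being applied, and succeeds at 20:51:15.875 once the account exists. The end state can be confirmed after the fact (sketch):

    # Both should succeed once the dashboard manifests have landed.
    kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
    kubectl -n kubernetes-dashboard get replicaset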
	
	* 
	* ==> kube-proxy [504ce37aeae71f22a0f050a155ae8bec691ed8ce5e35f0c0518d3237d9826f88] <==
	* I0813 20:51:15.377419       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:51:15.377532       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:51:15.377560       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:51:15.493770       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:51:15.493898       1 server_others.go:212] Using iptables Proxier.
	I0813 20:51:15.493937       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:51:15.493980       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:51:15.494303       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:51:15.495466       1 config.go:315] Starting service config controller
	I0813 20:51:15.495492       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:51:15.495511       1 config.go:224] Starting endpoint slice config controller
	I0813 20:51:15.495514       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 20:51:15.558121       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210813204216-13784.169af8e47dd4fb36", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4b0dd84176e, ext:602586517, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210813204216-13784", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210813204216-13784", UID:"no-preload-20210813204216-13784", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210813204216-13784.169af8e47dd4fb36" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:51:15.658079       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:51:15.658247       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [aef6ebbbe8620042fc604d68961705dd4b2b333af7aff651c0c15d0a72d455fe] <==
	* W0813 20:50:56.970218       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:50:56.970225       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:50:56.985131       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0813 20:50:56.985270       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 20:50:56.985302       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:50:56.985323       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0813 20:50:56.987239       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:50:56.987458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:50:56.987832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:57.059267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:50:57.059343       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:50:57.061403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:50:57.061672       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:50:57.061818       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:57.062303       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:50:57.062596       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:50:57.062808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:50:57.062890       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:57.062940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:50:57.059511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:57.059768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:50:57.970062       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:58.063836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:50:58.195374       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0813 20:50:58.386017       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
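
The reflector errors above follow a similar pattern: the scheduler comes up before the apiserver has finished reconciling its bootstrap RBAC policy, so its initial list/watch calls are forbidden; the errors are confined to the first seconds after startup. Whether the permissions eventually exist can be checked directly (sketch):

    # Verify the scheduler's RBAC once the cluster has settled.
    kubectl auth can-i list nodes --as=system:kube-scheduler
    kubectl auth can-i list pods --as=system:kube-scheduler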
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:45:25 UTC, end at Fri 2021-08-13 20:52:16 UTC. --
	Aug 13 20:51:23 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:23.082450    4314 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe861d02-23aa-4feb-a9f7-53652d9f9906-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe861d02-23aa-4feb-a9f7-53652d9f9906" (UID: "fe861d02-23aa-4feb-a9f7-53652d9f9906"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:51:23 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:23.109900    4314 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe861d02-23aa-4feb-a9f7-53652d9f9906-kube-api-access-5gr5l" (OuterVolumeSpecName: "kube-api-access-5gr5l") pod "fe861d02-23aa-4feb-a9f7-53652d9f9906" (UID: "fe861d02-23aa-4feb-a9f7-53652d9f9906"). InnerVolumeSpecName "kube-api-access-5gr5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:51:23 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:23.183017    4314 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe861d02-23aa-4feb-a9f7-53652d9f9906-config-volume\") on node \"no-preload-20210813204216-13784\" DevicePath \"\""
	Aug 13 20:51:23 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:23.183057    4314 reconciler.go:319] "Volume detached for volume \"kube-api-access-5gr5l\" (UniqueName: \"kubernetes.io/projected/fe861d02-23aa-4feb-a9f7-53652d9f9906-kube-api-access-5gr5l\") on node \"no-preload-20210813204216-13784\" DevicePath \"\""
	Aug 13 20:51:24 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:24.588947    4314 scope.go:110] "RemoveContainer" containerID="32f8b2ec2a162f7e694c50fddf1c50cec97b930bc29222438342d431f0732906"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:25.334113    4314 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fe861d02-23aa-4feb-a9f7-53652d9f9906 path="/var/lib/kubelet/pods/fe861d02-23aa-4feb-a9f7-53652d9f9906/volumes"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: W0813 20:51:25.581348    4314 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:25.590470    4314 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/docker/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c\": RecentStats: unable to find data in memory cache], [\"/system.slice/crio-13346f79e0b1a0b1a696ae046ce04970accd7893fac6317c3f322cdb16028bd0.scope\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:25.591867    4314 scope.go:110] "RemoveContainer" containerID="32f8b2ec2a162f7e694c50fddf1c50cec97b930bc29222438342d431f0732906"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:25.592039    4314 scope.go:110] "RemoveContainer" containerID="63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:25.592395    4314 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-pgwss_kubernetes-dashboard(c9e941e0-4a98-447f-a0b8-e7c77da918f6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss" podUID=c9e941e0-4a98-447f-a0b8-e7c77da918f6
	Aug 13 20:51:26 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:26.594886    4314 scope.go:110] "RemoveContainer" containerID="63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	Aug 13 20:51:26 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:26.595173    4314 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-pgwss_kubernetes-dashboard(c9e941e0-4a98-447f-a0b8-e7c77da918f6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss" podUID=c9e941e0-4a98-447f-a0b8-e7c77da918f6
	Aug 13 20:51:27 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:27.596352    4314 scope.go:110] "RemoveContainer" containerID="63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	Aug 13 20:51:27 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:27.596593    4314 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-pgwss_kubernetes-dashboard(c9e941e0-4a98-447f-a0b8-e7c77da918f6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss" podUID=c9e941e0-4a98-447f-a0b8-e7c77da918f6
	Aug 13 20:51:30 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:30.212480    4314 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:51:30 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:30.212536    4314 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:51:30 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:30.212699    4314 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-64z55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-trj2k_kube-system(8f30b352-ee9a-4412-a279-83a5caa024bf): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:51:30 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:30.212749    4314 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-trj2k" podUID=8f30b352-ee9a-4412-a279-83a5caa024bf
	Aug 13 20:51:35 no-preload-20210813204216-13784 kubelet[4314]: W0813 20:51:35.612855    4314 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:51:35 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:35.615507    4314 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/docker/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:51:41 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:41.333121    4314 scope.go:110] "RemoveContainer" containerID="63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	Aug 13 20:51:41 no-preload-20210813204216-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:51:41 no-preload-20210813204216-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:51:41 no-preload-20210813204216-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c] <==
	* 2021/08/13 20:51:30 Using namespace: kubernetes-dashboard
	2021/08/13 20:51:30 Using in-cluster config to connect to apiserver
	2021/08/13 20:51:30 Using secret token for csrf signing
	2021/08/13 20:51:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:51:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:51:30 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/13 20:51:30 Generating JWE encryption key
	2021/08/13 20:51:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:51:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:51:30 Initializing JWE encryption key from synchronized object
	2021/08/13 20:51:30 Creating in-cluster Sidecar client
	2021/08/13 20:51:30 Serving insecurely on HTTP port: 9090
	2021/08/13 20:51:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:51:30 Starting overwatch
	
	* 
	* ==> storage-provisioner [78940ce7ea25ea4289edad9d81a0e04be6b4b00989fd85859af5794c53d789f6] <==
	* 	/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0001805a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0001fc780, 0x18e5530, 0xc0001a8580, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0003ba120)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0003ba120, 0x18b3d60, 0xc0003802d0, 0x1, 0xc000092300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003ba120, 0x3b9aca00, 0x0, 0x1, 0xc000092300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0003ba120, 0x3b9aca00, 0xc000092300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 121 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc0001a8300, 0xc000156000)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

-- /stdout --
** stderr ** 
	E0813 20:52:15.883555  281188 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210813204216-13784
E0813 20:52:16.207382   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
helpers_test.go:236: (dbg) docker inspect no-preload-20210813204216-13784:

-- stdout --
	[
	    {
	        "Id": "472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c",
	        "Created": "2021-08-13T20:42:19.249594641Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233512,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:45:25.060722245Z",
	            "FinishedAt": "2021-08-13T20:45:22.690105537Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/hosts",
	        "LogPath": "/var/lib/docker/containers/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c-json.log",
	        "Name": "/no-preload-20210813204216-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210813204216-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210813204216-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/416ed91688eef7fd372daf2da68950783a07dfc25b0bfb3dce4d54b5cac624e6-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/416ed91688eef7fd372daf2da68950783a07dfc25b0bfb3dce4d54b5cac624e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/416ed91688eef7fd372daf2da68950783a07dfc25b0bfb3dce4d54b5cac624e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/416ed91688eef7fd372daf2da68950783a07dfc25b0bfb3dce4d54b5cac624e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210813204216-13784",
	                "Source": "/var/lib/docker/volumes/no-preload-20210813204216-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210813204216-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210813204216-13784",
	                "name.minikube.sigs.k8s.io": "no-preload-20210813204216-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a10346a8dda94d1eb351409cac8253f614953044c4be23fa2f3e796c6cec4a58",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32941"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32943"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32942"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a10346a8dda9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210813204216-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "472ef6c90d7b"
	                    ],
	                    "NetworkID": "e58530d1cbfdb6ce18e1d2e9fb761572954ee4ce5a9dfaf840d323eece84d305",
	                    "EndpointID": "7b186cffb514a2ef1a4e845fc6982cf4750375f257599b88cab47f93f5916ab7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204216-13784 -n no-preload-20210813204216-13784

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204216-13784 -n no-preload-20210813204216-13784: exit status 2 (15.779512684s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0813 20:52:31.991474  282478 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210813204216-13784 logs -n 25
E0813 20:52:32.426297   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:32.431576   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:32.441863   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:32.462644   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:32.503610   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:32.584516   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:32.745186   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:33.065428   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:33.706358   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:34.987313   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p no-preload-20210813204216-13784 logs -n 25: exit status 110 (1m0.995650731s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:21 UTC | Fri, 13 Aug 2021 20:49:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:50:24 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:24 UTC | Fri, 13 Aug 2021 20:50:25 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:25 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:50:50 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:00 UTC | Fri, 13 Aug 2021 20:51:00 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:03 UTC | Fri, 13 Aug 2021 20:51:03 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:05 UTC | Fri, 13 Aug 2021 20:51:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:06 UTC | Fri, 13 Aug 2021 20:51:10 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:11 UTC | Fri, 13 Aug 2021 20:51:11 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:51:12 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:13 UTC | Fri, 13 Aug 2021 20:51:13 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:51:22 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:41 UTC | Fri, 13 Aug 2021 20:51:41 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:52:08 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:18 UTC | Fri, 13 Aug 2021 20:52:19 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204407-13784            | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:21 UTC | Fri, 13 Aug 2021 20:52:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| start   | -p auto-20210813204009-13784                               | auto-20210813204009-13784                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:11 UTC | Fri, 13 Aug 2021 20:52:23 UTC |
	|         | --memory=2048                                              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                 |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	| ssh     | -p auto-20210813204009-13784                               | auto-20210813204009-13784                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:23 UTC | Fri, 13 Aug 2021 20:52:23 UTC |
	|         | pgrep -a kubelet                                           |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204407-13784            | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:23 UTC | Fri, 13 Aug 2021 20:52:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:24 UTC | Fri, 13 Aug 2021 20:52:28 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:28 UTC | Fri, 13 Aug 2021 20:52:29 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:52:29
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:52:29.245577  285040 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:29.245982  285040 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:29.245996  285040 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:29.246003  285040 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:29.246273  285040 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:29.246983  285040 out.go:305] Setting JSON to false
	I0813 20:52:29.282380  285040 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5712,"bootTime":1628882237,"procs":324,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:52:29.282480  285040 start.go:121] virtualization: kvm guest
	I0813 20:52:29.285051  285040 out.go:177] * [custom-weave-20210813204011-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:52:29.286854  285040 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:52:29.285197  285040 notify.go:169] Checking for updates...
	I0813 20:52:29.288441  285040 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:52:29.290061  285040 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:52:29.291541  285040 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:52:29.292044  285040 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:52:29.292178  285040 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:29.292281  285040 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:52:29.292323  285040 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:52:29.341315  285040 docker.go:132] docker version: linux-19.03.15
	I0813 20:52:29.341410  285040 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:29.425659  285040 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:29.379924874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:29.425755  285040 docker.go:244] overlay module found
	I0813 20:52:29.428151  285040 out.go:177] * Using the docker driver based on user configuration
	I0813 20:52:29.428178  285040 start.go:278] selected driver: docker
	I0813 20:52:29.428184  285040 start.go:751] validating driver "docker" against <nil>
	I0813 20:52:29.428206  285040 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:52:29.428294  285040 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:52:29.428320  285040 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:52:29.430035  285040 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:52:29.430864  285040 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:52:29.513068  285040 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:52:29.467446998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:52:29.513195  285040 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:52:29.513368  285040 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:52:29.513396  285040 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 20:52:29.513415  285040 start_flags.go:272] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0813 20:52:29.513434  285040 start_flags.go:277] config:
	{Name:custom-weave-20210813204011-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813204011-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:52:29.515770  285040 out.go:177] * Starting control plane node custom-weave-20210813204011-13784 in cluster custom-weave-20210813204011-13784
	I0813 20:52:29.515819  285040 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:52:29.517498  285040 out.go:177] * Pulling base image ...
	I0813 20:52:29.517548  285040 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:52:29.517575  285040 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:52:29.517588  285040 cache.go:56] Caching tarball of preloaded images
	I0813 20:52:29.517650  285040 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:52:29.517732  285040 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:52:29.517757  285040 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:52:29.517852  285040 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204011-13784/config.json ...
	I0813 20:52:29.517872  285040 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813204011-13784/config.json: {Name:mk250a0d6c03cf7a620fc997cd01897dc2b83c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:52:29.610720  285040 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:52:29.610755  285040 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:52:29.610773  285040 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:52:29.610811  285040 start.go:313] acquiring machines lock for custom-weave-20210813204011-13784: {Name:mkf63a27478c4c9c0d1c3a1e6788a46654aa3c2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:52:29.610932  285040 start.go:317] acquired machines lock for "custom-weave-20210813204011-13784" in 101.552µs
	I0813 20:52:29.610959  285040 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210813204011-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813204011-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:52:29.611023  285040 start.go:126] createHost starting for "" (driver="docker")
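	The start log above captures the whole pre-boot sequence for this profile: the docker driver is selected and validated, the custom CNI manifest testdata/weavenet.yaml is detected (forcing NetworkPlugin=cni), the cached v1.21.3 cri-o preload and kicbase image are reused, and the machines lock is acquired before createHost. A plausible manual equivalent, reconstructed from the config dump above; the flag names are real minikube flags, but the exact invocation used by the test harness is an assumption:

	  # Assumed reconstruction of this start, not copied from the log:
	  out/minikube-linux-amd64 start -p custom-weave-20210813204011-13784 \
	    --driver=docker --container-runtime=crio --kubernetes-version=v1.21.3 \
	    --memory=2048 --cpus=2 --cni=testdata/weavenet.yaml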
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:45:25 UTC, end at Fri 2021-08-13 20:52:32 UTC. --
	Aug 13 20:51:24 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:24.739059632Z" level=info msg="Starting container: 63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a" id=8dd4b597-4232-4569-9bbe-745bc92931c4 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:24 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:24.763362871Z" level=info msg="Started container 63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=8dd4b597-4232-4569-9bbe-745bc92931c4 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:25 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:25.469811075Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6\""
	Aug 13 20:51:25 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:25.593757274Z" level=info msg="Removing container: 32f8b2ec2a162f7e694c50fddf1c50cec97b930bc29222438342d431f0732906" id=ecc90c0d-c960-45d5-9603-d62c3c0c3b64 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:51:25 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:25.631090728Z" level=info msg="Removed container 32f8b2ec2a162f7e694c50fddf1c50cec97b930bc29222438342d431f0732906: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=ecc90c0d-c960-45d5-9603-d62c3c0c3b64 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:51:28 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:28.332879467Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=4f266af7-d513-47d5-9de6-c6b47fe39876 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:28 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:28.333108141Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=4f266af7-d513-47d5-9de6-c6b47fe39876 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.196039456Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f" id=9f046245-d391-450e-816e-350bc64fd5c3 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.196664039Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=8d3671c9-49cf-4b09-8ebe-86c7f9a71781 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.196920388Z" level=info msg="Checking image status: kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6" id=35d8449f-df6d-4bdc-9209-8b21a0577d28 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.197930087Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db,RepoTags:[docker.io/kubernetesui/dashboard:v2.1.0],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 docker.io/kubernetesui/dashboard@sha256:8cd877c1c0909bdd50043edc18b89cfbbf0614a57893ebf59b6bd1ddb5419323],Size_:228529574,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=35d8449f-df6d-4bdc-9209-8b21a0577d28 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.198767461Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-n9kxj/kubernetes-dashboard" id=c0246d7c-33e2-4a85-97af-cfe87016af30 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.207808407Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.212544825Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3cff0b389158bb5aa0d351ee21ddb115f87b84094a2f45df1e170cfc8f9a5736/merged/etc/group: no such file or directory"
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.375229032Z" level=info msg="Created container ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-n9kxj/kubernetes-dashboard" id=c0246d7c-33e2-4a85-97af-cfe87016af30 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.375789685Z" level=info msg="Starting container: ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c" id=6ccaf67b-f304-4613-b03b-6e99d256e454 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:30 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:30.388381078Z" level=info msg="Started container ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-n9kxj/kubernetes-dashboard" id=6ccaf67b-f304-4613-b03b-6e99d256e454 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.333757365Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=689d0c7c-a487-4d7a-ae3e-a5c5f8a465b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.335582064Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=689d0c7c-a487-4d7a-ae3e-a5c5f8a465b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.336143561Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=ae6a6bc5-03f6-4e12-8704-bf24f37c6fd9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.337882718Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ae6a6bc5-03f6-4e12-8704-bf24f37c6fd9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.338606495Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=844e0fe8-7acb-4d79-8ea4-b4b4c2899d56 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.532932565Z" level=info msg="Created container acd231be03d703151899011383876b333c8c8a75608cec92be237de868e25638: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=844e0fe8-7acb-4d79-8ea4-b4b4c2899d56 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.533542803Z" level=info msg="Starting container: acd231be03d703151899011383876b333c8c8a75608cec92be237de868e25638" id=cc7bc5c6-0fea-4842-80ba-6a17793a2a44 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:51:41 no-preload-20210813204216-13784 crio[243]: time="2021-08-13 20:51:41.562802999Z" level=info msg="Started container acd231be03d703151899011383876b333c8c8a75608cec92be237de868e25638: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss/dashboard-metrics-scraper" id=cc7bc5c6-0fea-4842-80ba-6a17793a2a44 name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID
	acd231be03d70       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           51 seconds ago       Exited              dashboard-metrics-scraper   2                   af7720ab8bfd7
	ee382c43816e2       docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f   About a minute ago   Running             kubernetes-dashboard        0                   a9225630819e9
	63e63313a3f33       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           About a minute ago   Exited              dashboard-metrics-scraper   1                   af7720ab8bfd7
	78940ce7ea25e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   6b55c696964ea
	da11f623096a4       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                           About a minute ago   Running             kindnet-cni                 0                   8335645b41e5e
	504ce37aeae71       ea6b13ed84e03cd9a7ca23694500dcb39ee3a02077048b998f6f89fb3b25323c                                           About a minute ago   Running             kube-proxy                  0                   dae9070ce3b8f
	9d61a3e5c94c9       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           About a minute ago   Exited              coredns                     0                   ccf0f7dbb417e
	13346f79e0b1a       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                           About a minute ago   Running             coredns                     0                   9014d958b77af
	3575fcd4de6e5       cf9cba6c3e4a874f0cb78ba84251e932259541f9b312dda7772092878eb9d25c                                           About a minute ago   Running             kube-controller-manager     2                   9360a0eb0cf50
	b9a8b46e0a449       b2462aa94d403bbf4a9d2b9b5e0089ed0c3613656c80dd291533dfe7426ffa2a                                           About a minute ago   Running             kube-apiserver              2                   f4d48caa6942d
	2ed1a101fdc2d       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba                                           About a minute ago   Running             etcd                        2                   65162c4da1625
	aef6ebbbe8620       7da2efaa5b48003fcc45fcfa72611821e30b7c49b063788656858d1e8f5a6a75                                           About a minute ago   Running             kube-scheduler              2                   7452130766e66
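	The table above is CRI-O's container listing for the node. Assuming crictl is present in the kicbase image (it normally is), the same view can be regenerated through the test binary's ssh helper:

	  # Re-list all containers, including exited ones:
	  out/minikube-linux-amd64 -p no-preload-20210813204216-13784 ssh "sudo crictl ps -a"
	  # Inspect why dashboard-metrics-scraper keeps exiting (an ID prefix is enough):
	  out/minikube-linux-amd64 -p no-preload-20210813204216-13784 ssh "sudo crictl logs acd231be03d70"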
	
	* 
	* ==> coredns [13346f79e0b1a0b1a696ae046ce04970accd7893fac6317c3f322cdb16028bd0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> coredns [9d61a3e5c94c90b74a98eb9e43ec6b96d9929d13e2bfc65bd3805f912a57a9e8] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug13 20:52] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth42b216bb
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 75 7c 88 de fd 08 06        .......u|.....
	[  +0.348033] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth3a91f4fd
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1e a0 d8 e2 a6 b4 08 06        ..............
	[  +7.435044] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +5.490524] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000025] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +2.047860] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000002] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +4.034563] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd8e69602
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 93 4a 9f fb 2d 08 06        ........J..-..
	[  +3.841985] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth22aecc4f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 46 ba a1 61 ad 12 08 06        ......F..a....
	[  +7.179465] cgroup: cgroup2: unknown option "nsdelegate"
	[ +10.694631] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 20:53] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.450673] IPv4: martian source 10.32.0.1 from 10.32.0.1, on dev datapath
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fe aa d0 57 fc 42 08 06        .........W.B..
	[  +0.376936] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 a5 58 79 6e 14 08 06        ......B.Xyn...
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 42 a5 58 79 6e 14 08 06        ......B.Xyn...
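	The "martian source" lines are the kernel flagging packets whose source address is impossible or unexpected on the receiving interface; they show up when net.ipv4.conf.*.log_martians is enabled and, on a shared CI host running parallel clusters over overlapping bridge/veth networks, they are common noise rather than a failure. A first check, assuming the same ssh helper (the sysctl is per network namespace, so this is only an approximation):

	  out/minikube-linux-amd64 -p no-preload-20210813204216-13784 ssh "sysctl net.ipv4.conf.all.log_martians"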
	
	* 
	* ==> etcd [2ed1a101fdc2de06b7ccca482af6efe456ca1dfd0df0793a7496d57e53a5d09a] <==
	* {"level":"info","ts":"2021-08-13T20:50:53.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-08-13T20:50:53.058Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-08-13T20:50:53.060Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-13T20:50:53.061Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-08-13T20:50:53.061Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-08-13T20:50:53.061Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-13T20:50:53.061Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-08-13T20:50:53.492Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:no-preload-20210813204216-13784 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T20:50:53.493Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T20:50:53.495Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-13T20:50:53.495Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:53:32 up  1:36,  0 users,  load average: 4.22, 3.07, 2.44
	Linux no-preload-20210813204216-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [b9a8b46e0a44906e80af7b7fe48165ba7a021fdc512db2d3a043f636e20feb0e] <==
	* W0813 20:53:29.744304       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:29.814324       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:29.820420       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:29.823660       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:29.898937       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:29.914874       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:30.012252       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:30.108058       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:30.215047       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:30.240382       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:30.291673       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:30.357192       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:30.591591       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:32.218463       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:32.338718       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0813 20:53:32.372290       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0813 20:53:32.721099       1 trace.go:205] Trace[60693615]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (13-Aug-2021 20:52:32.720) (total time: 60000ms):
	Trace[60693615]: [1m0.000248867s] [1m0.000248867s] END
	E0813 20:53:32.721132       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0813 20:53:32.721259       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:53:32.722422       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:53:32.723576       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:53:32.725107       1 trace.go:205] Trace[432997630]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:3b171a0a-a521-4ad3-8629-cd1459435e10,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (13-Aug-2021 20:52:32.720) (total time: 60004ms):
	Trace[432997630]: [1m0.0042813s] [1m0.0042813s] END
	E0813 20:53:32.729012       1 timeout.go:135] post-timeout activity - time-elapsed: 7.708391ms, GET "/api/v1/nodes" result: <nil>
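	By 20:53 the apiserver can no longer reach etcd on 127.0.0.1:2379: every gRPC transport dial times out, a List of nodes burns its full 60s budget before ending in "context deadline exceeded", and the caller gets an HTTP handler timeout. Since /readyz aggregates an etcd check, a reasonable first diagnostic (standard kubectl, following this report's --context convention) would be:

	  # Each failing check is listed by name; the etcd check would be the suspect here
	  kubectl --context no-preload-20210813204216-13784 get --raw='/readyz?verbose'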
	
	* 
	* ==> kube-controller-manager [3575fcd4de6e5072e4942339287b773ba8c67eda42193f2a9553e51c1bb336cd] <==
	* I0813 20:51:15.586337       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0813 20:51:15.591704       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:51:15.591753       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.596584       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:51:15.670856       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:51:15.671430       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.671482       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.676600       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.676670       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.683001       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:51:15.683114       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.683745       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:51:15.683782       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.758408       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:51:15.759866       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.760333       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:51:15.760411       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:15.765196       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:15.765327       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:51:15.875234       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-pgwss"
	I0813 20:51:15.875273       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-n9kxj"
	I0813 20:51:17.374789       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0813 20:51:23.610869       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0813 20:51:42.583813       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 20:51:43.042798       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
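	The FailedCreate burst above is a startup ordering race, not a persistent failure: the ReplicaSet controller tried to create the dashboard pods before the kubernetes-dashboard ServiceAccount existed, retried with backoff, and both SuccessfulCreate events land at 20:51:15.875 once the account appears. To confirm the objects afterwards (plain kubectl, using only names the log already gives):

	  kubectl --context no-preload-20210813204216-13784 -n kubernetes-dashboard get serviceaccount,replicaset,pods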
	
	* 
	* ==> kube-proxy [504ce37aeae71f22a0f050a155ae8bec691ed8ce5e35f0c0518d3237d9826f88] <==
	* I0813 20:51:15.377419       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 20:51:15.377532       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 20:51:15.377560       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0813 20:51:15.493770       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:51:15.493898       1 server_others.go:212] Using iptables Proxier.
	I0813 20:51:15.493937       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:51:15.493980       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:51:15.494303       1 server.go:649] Version: v1.22.0-rc.0
	I0813 20:51:15.495466       1 config.go:315] Starting service config controller
	I0813 20:51:15.495492       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:51:15.495511       1 config.go:224] Starting endpoint slice config controller
	I0813 20:51:15.495514       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 20:51:15.558121       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210813204216-13784.169af8e47dd4fb36", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd4b0dd84176e, ext:602586517, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210813204216-13784", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210813204216-13784", UID:"no-preload-20210813204216-13784", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210813204216-13784.169af8e47dd4fb36" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 20:51:15.658079       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:51:15.658247       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [aef6ebbbe8620042fc604d68961705dd4b2b333af7aff651c0c15d0a72d455fe] <==
	* W0813 20:50:56.970218       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:50:56.970225       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:50:56.985131       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0813 20:50:56.985270       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 20:50:56.985302       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:50:56.985323       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0813 20:50:56.987239       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:50:56.987458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:50:56.987832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:57.059267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:50:57.059343       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:50:57.061403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:50:57.061672       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:50:57.061818       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:57.062303       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:50:57.062596       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:50:57.062808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 20:50:57.062890       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:57.062940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:50:57.059511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:57.059768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:50:57.970062       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:50:58.063836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:50:58.195374       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0813 20:50:58.386017       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:45:25 UTC, end at Fri 2021-08-13 20:53:32 UTC. --
	Aug 13 20:51:23 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:23.082450    4314 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe861d02-23aa-4feb-a9f7-53652d9f9906-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe861d02-23aa-4feb-a9f7-53652d9f9906" (UID: "fe861d02-23aa-4feb-a9f7-53652d9f9906"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 20:51:23 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:23.109900    4314 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe861d02-23aa-4feb-a9f7-53652d9f9906-kube-api-access-5gr5l" (OuterVolumeSpecName: "kube-api-access-5gr5l") pod "fe861d02-23aa-4feb-a9f7-53652d9f9906" (UID: "fe861d02-23aa-4feb-a9f7-53652d9f9906"). InnerVolumeSpecName "kube-api-access-5gr5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 20:51:23 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:23.183017    4314 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe861d02-23aa-4feb-a9f7-53652d9f9906-config-volume\") on node \"no-preload-20210813204216-13784\" DevicePath \"\""
	Aug 13 20:51:23 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:23.183057    4314 reconciler.go:319] "Volume detached for volume \"kube-api-access-5gr5l\" (UniqueName: \"kubernetes.io/projected/fe861d02-23aa-4feb-a9f7-53652d9f9906-kube-api-access-5gr5l\") on node \"no-preload-20210813204216-13784\" DevicePath \"\""
	Aug 13 20:51:24 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:24.588947    4314 scope.go:110] "RemoveContainer" containerID="32f8b2ec2a162f7e694c50fddf1c50cec97b930bc29222438342d431f0732906"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:25.334113    4314 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fe861d02-23aa-4feb-a9f7-53652d9f9906 path="/var/lib/kubelet/pods/fe861d02-23aa-4feb-a9f7-53652d9f9906/volumes"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: W0813 20:51:25.581348    4314 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:25.590470    4314 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/docker/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c\": RecentStats: unable to find data in memory cache], [\"/system.slice/crio-13346f79e0b1a0b1a696ae046ce04970accd7893fac6317c3f322cdb16028bd0.scope\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:25.591867    4314 scope.go:110] "RemoveContainer" containerID="32f8b2ec2a162f7e694c50fddf1c50cec97b930bc29222438342d431f0732906"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:25.592039    4314 scope.go:110] "RemoveContainer" containerID="63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	Aug 13 20:51:25 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:25.592395    4314 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-pgwss_kubernetes-dashboard(c9e941e0-4a98-447f-a0b8-e7c77da918f6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss" podUID=c9e941e0-4a98-447f-a0b8-e7c77da918f6
	Aug 13 20:51:26 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:26.594886    4314 scope.go:110] "RemoveContainer" containerID="63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	Aug 13 20:51:26 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:26.595173    4314 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-pgwss_kubernetes-dashboard(c9e941e0-4a98-447f-a0b8-e7c77da918f6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss" podUID=c9e941e0-4a98-447f-a0b8-e7c77da918f6
	Aug 13 20:51:27 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:27.596352    4314 scope.go:110] "RemoveContainer" containerID="63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	Aug 13 20:51:27 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:27.596593    4314 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-pgwss_kubernetes-dashboard(c9e941e0-4a98-447f-a0b8-e7c77da918f6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-pgwss" podUID=c9e941e0-4a98-447f-a0b8-e7c77da918f6
	Aug 13 20:51:30 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:30.212480    4314 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:51:30 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:30.212536    4314 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:51:30 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:30.212699    4314 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-64z55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-trj2k_kube-system(8f30b352-ee9a-4412-a279-83a5caa024bf): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 13 20:51:30 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:30.212749    4314 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-trj2k" podUID=8f30b352-ee9a-4412-a279-83a5caa024bf
	Aug 13 20:51:35 no-preload-20210813204216-13784 kubelet[4314]: W0813 20:51:35.612855    4314 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 20:51:35 no-preload-20210813204216-13784 kubelet[4314]: E0813 20:51:35.615507    4314 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c/docker/472ef6c90d7b2eb883faa1cde20960e8c3cc3a748f04b0753ed0d3d734875d6c\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:51:41 no-preload-20210813204216-13784 kubelet[4314]: I0813 20:51:41.333121    4314 scope.go:110] "RemoveContainer" containerID="63e63313a3f33ba3e9850413cfadf7955ff18e0d5b9e69eaf8c683f7c851908a"
	Aug 13 20:51:41 no-preload-20210813204216-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:51:41 no-preload-20210813204216-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:51:41 no-preload-20210813204216-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [ee382c43816e26c7a40ce2bd9f4e7111a04806866dd36c8f0672e52049a6f84c] <==
	* 2021/08/13 20:51:30 Starting overwatch
	2021/08/13 20:51:30 Using namespace: kubernetes-dashboard
	2021/08/13 20:51:30 Using in-cluster config to connect to apiserver
	2021/08/13 20:51:30 Using secret token for csrf signing
	2021/08/13 20:51:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:51:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:51:30 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/13 20:51:30 Generating JWE encryption key
	2021/08/13 20:51:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:51:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:51:30 Initializing JWE encryption key from synchronized object
	2021/08/13 20:51:30 Creating in-cluster Sidecar client
	2021/08/13 20:51:30 Serving insecurely on HTTP port: 9090
	2021/08/13 20:51:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:22 Metric client health check failed: an error on the server ("unknown") has prevented the request from succeeding (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [78940ce7ea25ea4289edad9d81a0e04be6b4b00989fd85859af5794c53d789f6] <==
	* 	/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0001805a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0001fc780, 0x18e5530, 0xc0001a8580, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0003ba120)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0003ba120, 0x18b3d60, 0xc0003802d0, 0x1, 0xc000092300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003ba120, 0x3b9aca00, 0x0, 0x1, 0xc000092300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0003ba120, 0x3b9aca00, 0xc000092300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 121 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc0001a8300, 0xc000156000)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:53:32.725015  286141 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
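The storage-provisioner excerpt above is an idle goroutine dump rather than a crash: the worker is parked in client-go's workqueue, whose Get blocks on a condition variable (cond.go:56 in the trace) until an item arrives or the queue shuts down. A minimal sketch of that consumer loop, using the same k8s.io/client-go/util/workqueue package; the item name is hypothetical, and this is not the provisioner's actual code:

	package main

	import (
		"fmt"

		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		// A goroutine blocked in q.Get(), as in the stack trace above, is
		// simply waiting for work; Get only returns once an item is Added
		// or the queue is shut down.
		q := workqueue.New()
		go func() {
			q.Add("pvc-example") // hypothetical work item
			q.ShutDown()
		}()
		for {
			item, shutdown := q.Get()
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			q.Done(item) // mark the item finished so it can be re-queued later
		}
	}

The real failure signal is therefore not this dump but the apiserver timeout in the stderr block above it.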
--- FAIL: TestStartStop/group/no-preload/serial/Pause (111.56s)
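Two separate signals appear in the logs above. The metrics-server ErrImagePull is expected noise: the addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table further down), and fake.domain deliberately never resolves. The actual failure is that, after pausing, the post-mortem kubectl describe nodes timed out against the unresponsive apiserver, so minikube logs exited 110. A sketch of that post-mortem step under the same conditions; it assumes kubectl on PATH and is not the harness's exact code:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Run "kubectl describe nodes" with a deadline, roughly what the
		// post-mortem helper does; against a paused control plane the
		// apiserver cannot answer, so the command fails or times out.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		out, err := exec.CommandContext(ctx, "kubectl", "describe", "nodes").CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}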

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Pause (5.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813204407-13784 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813204407-13784 --alsologtostderr -v=1: exit status 80 (1.957932085s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-different-port-20210813204407-13784 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:52:19.112324  282657 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:52:19.112534  282657 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:19.112544  282657 out.go:311] Setting ErrFile to fd 2...
	I0813 20:52:19.112547  282657 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:52:19.112644  282657 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:52:19.112825  282657 out.go:305] Setting JSON to false
	I0813 20:52:19.112848  282657 mustload.go:65] Loading cluster: default-k8s-different-port-20210813204407-13784
	I0813 20:52:19.113141  282657 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:52:19.113481  282657 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:52:19.152122  282657 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:52:19.152805  282657 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20210813204407-13784 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:52:19.155205  282657 out.go:177] * Pausing node default-k8s-different-port-20210813204407-13784 ... 
	I0813 20:52:19.155229  282657 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:52:19.155449  282657 ssh_runner.go:149] Run: systemctl --version
	I0813 20:52:19.155483  282657 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:52:19.193719  282657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:52:19.285609  282657 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:19.297396  282657 pause.go:50] kubelet running: true
	I0813 20:52:19.297451  282657 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:52:19.441012  282657 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:52:19.441087  282657 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:52:19.515718  282657 cri.go:76] found id: "0643fb501141e4daad159a56c590d69eaffa9d182a31d9e47c66b7dcb2be547b"
	I0813 20:52:19.515744  282657 cri.go:76] found id: "7bc03b2fb5b97ade5a5ced7d7239284d3648b6e56d26bae93cf0afb2762b8dd9"
	I0813 20:52:19.515749  282657 cri.go:76] found id: "ea2cef428a3288c41b5f0fff2df8ec259ada78d0561b68c1c7b7641097b488dd"
	I0813 20:52:19.515753  282657 cri.go:76] found id: "7cac3115278493ad19c4fc11545c81f5e1f16a562ffb0c986cb0422c970497c6"
	I0813 20:52:19.515757  282657 cri.go:76] found id: "723cb987e243addf3d24ee09648bd2c1d90f154b96b37e9dd97773224e44d0f9"
	I0813 20:52:19.515766  282657 cri.go:76] found id: "b889a4cfb9f98a0e1a75bc248fc206db415ef20ec427118e4f8f92c02d3ced22"
	I0813 20:52:19.515770  282657 cri.go:76] found id: "a82cc1cca9cca72dbbda29ae20add1371d75089232e162bebda4d7a93f4b7229"
	I0813 20:52:19.515775  282657 cri.go:76] found id: "18e273df7ef8068f15e66a27a5594ba380dcfee8d6092dc5991709cec2f3b326"
	I0813 20:52:19.515781  282657 cri.go:76] found id: "bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c"
	I0813 20:52:19.515791  282657 cri.go:76] found id: "eb7f966beddcab227c5a1f1a1ac25ac04ed015d0b9cfa75264ee8239b8c5db6a"
	I0813 20:52:19.515800  282657 cri.go:76] found id: ""
	I0813 20:52:19.515837  282657 ssh_runner.go:149] Run: sudo runc list -f json

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813204407-13784 --alsologtostderr -v=1 failed: exit status 80
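The stderr trace shows how far pause progressed before exit status 80: it disabled the kubelet, listed CRI containers for the four target namespaces via crictl, and then ran sudo runc list -f json, after which nothing more is logged. A sketch of the namespace-scoped listing as it appears in the cri.go lines above; the helper name listCRIContainers is hypothetical, and it assumes crictl and passwordless sudo on the node:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers mirrors the invocation logged by cri.go above: one
	// "crictl ps -a --quiet --label io.kubernetes.pod.namespace=<ns>" per
	// namespace, chained into a single shell command. A sketch, not
	// minikube's actual implementation.
	func listCRIContainers(namespaces []string) ([]string, error) {
		var parts []string
		for _, ns := range namespaces {
			parts = append(parts, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
		}
		out, err := exec.Command("sudo", "-s", "eval", strings.Join(parts, "; ")).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listCRIContainers([]string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"})
		fmt.Println(ids, err)
	}

Chaining the per-namespace invocations into one sudo -s eval keeps the listing to a single round trip over the SSH session.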
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210813204407-13784
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210813204407-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7",
	        "Created": "2021-08-13T20:44:09.364516334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240625,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:45:51.386618589Z",
	            "FinishedAt": "2021-08-13T20:45:48.826035232Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/hosts",
	        "LogPath": "/var/lib/docker/containers/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7-json.log",
	        "Name": "/default-k8s-different-port-20210813204407-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210813204407-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210813204407-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/469150e9909f7d7be3943d1dd25ec247ae1d7312051fa04e07f493cf91ac43fe-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d65
39a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/d
ocker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8c
f267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469150e9909f7d7be3943d1dd25ec247ae1d7312051fa04e07f493cf91ac43fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469150e9909f7d7be3943d1dd25ec247ae1d7312051fa04e07f493cf91ac43fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469150e9909f7d7be3943d1dd25ec247ae1d7312051fa04e07f493cf91ac43fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210813204407-13784",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210813204407-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210813204407-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210813204407-13784",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210813204407-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b314eca3090283c97d42b0839d69d25d9425bf11eccdc20c92da930fbc23fb6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32954"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32951"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32953"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32952"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b314eca3090",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210813204407-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "be304b8d02d7"
	                    ],
	                    "NetworkID": "41c8a2fb43b67b7fc56fa6f5352beb90f41ea5d6d822a84ea53583b7212324ae",
	                    "EndpointID": "bec5ebb771a21eed2629cbfabb709ddffd96156e8d4abfe6f0f99fdc9763db24",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
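The inspect output shows the container still running, with SSH published on 127.0.0.1:32955, the same endpoint sshutil.go dialed during the failed pause. The harness extracts that port with a Go template over NetworkSettings.Ports; a minimal standalone sketch of the same lookup, assuming a local docker CLI and this profile's container name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same Go-template lookup that cli_runner.go logs above; resolves
		// the host port mapped to the container's sshd (22/tcp).
		name := "default-k8s-different-port-20210813204407-13784"
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		fmt.Printf("ssh host port: %s err: %v\n", out, err)
	}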
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784: exit status 2 (323.394976ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210813204407-13784 logs -n 25
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                                   | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:44 UTC | Fri, 13 Aug 2021 20:47:45 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:19 UTC | Fri, 13 Aug 2021 20:49:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:21 UTC | Fri, 13 Aug 2021 20:49:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:50:24 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:24 UTC | Fri, 13 Aug 2021 20:50:25 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:25 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:50:50 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:00 UTC | Fri, 13 Aug 2021 20:51:00 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:03 UTC | Fri, 13 Aug 2021 20:51:03 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:05 UTC | Fri, 13 Aug 2021 20:51:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:06 UTC | Fri, 13 Aug 2021 20:51:10 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:11 UTC | Fri, 13 Aug 2021 20:51:11 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:51:12 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:13 UTC | Fri, 13 Aug 2021 20:51:13 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:51:22 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:41 UTC | Fri, 13 Aug 2021 20:51:41 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:52:08 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:18 UTC | Fri, 13 Aug 2021 20:52:19 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:51:11
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:51:11.626877  271328 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:51:11.627052  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627060  271328 out.go:311] Setting ErrFile to fd 2...
	I0813 20:51:11.627064  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627159  271328 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:51:11.627409  271328 out.go:305] Setting JSON to false
	I0813 20:51:11.666661  271328 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5634,"bootTime":1628882237,"procs":328,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:51:11.666785  271328 start.go:121] virtualization: kvm guest
	I0813 20:51:11.669469  271328 out.go:177] * [auto-20210813204009-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:51:11.669645  271328 notify.go:169] Checking for updates...
	I0813 20:51:11.671319  271328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:11.672833  271328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:51:11.674351  271328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:51:11.675913  271328 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:51:11.676594  271328 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:11.676833  271328 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.676967  271328 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.677023  271328 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:51:11.731497  271328 docker.go:132] docker version: linux-19.03.15
	I0813 20:51:11.731582  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.824730  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.775305956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.824827  271328 docker.go:244] overlay module found
	I0813 20:51:11.826307  271328 out.go:177] * Using the docker driver based on user configuration
	I0813 20:51:11.826332  271328 start.go:278] selected driver: docker
	I0813 20:51:11.826337  271328 start.go:751] validating driver "docker" against <nil>
	I0813 20:51:11.826355  271328 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:51:11.826409  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:51:11.826435  271328 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:51:11.827724  271328 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
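The two cgroup warnings above are derived from the docker info dump a few lines earlier (note SwapLimit:false on this host). A quick manual check of the same fields, assuming a stock Docker CLI:

	docker info --format 'MemoryLimit={{.MemoryLimit}} SwapLimit={{.SwapLimit}} CgroupDriver={{.CgroupDriver}}'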
	I0813 20:51:11.828584  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.921127  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.870452453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.921281  271328 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:51:11.921463  271328 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:51:11.921497  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:11.921506  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:11.921514  271328 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:51:11.921523  271328 start_flags.go:277] config:
	{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:11.924012  271328 out.go:177] * Starting control plane node auto-20210813204009-13784 in cluster auto-20210813204009-13784
	I0813 20:51:11.924056  271328 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:51:11.925270  271328 out.go:177] * Pulling base image ...
	I0813 20:51:11.925296  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:11.925327  271328 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:51:11.925325  271328 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:51:11.925373  271328 cache.go:56] Caching tarball of preloaded images
	I0813 20:51:11.925616  271328 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:51:11.925640  271328 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:51:11.925773  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:11.925807  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json: {Name:mk3876305492e8ad5450e3976660c9fa1c973e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:12.029343  271328 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:51:12.029375  271328 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:51:12.029391  271328 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:51:12.029434  271328 start.go:313] acquiring machines lock for auto-20210813204009-13784: {Name:mkd0aba803bc7694302f970fb956ac46569643dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:51:12.029622  271328 start.go:317] acquired machines lock for "auto-20210813204009-13784" in 163.616µs
	I0813 20:51:12.029653  271328 start.go:89] Provisioning new machine with config: &{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:12.029748  271328 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:51:11.473988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:11.474018  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:11.573472  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:11.573526  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:11.658988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.659019  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:11.685635  264876 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.988027  264876 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813204926-13784"
	I0813 20:51:12.521134  264876 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:12.521160  264876 addons.go:344] enableAddons completed in 2.029586792s
	I0813 20:51:12.583342  264876 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:12.585304  264876 out.go:177] 
	W0813 20:51:12.585562  264876 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:12.587605  264876 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:12.589196  264876 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813204926-13784" cluster and "default" namespace by default
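The skew warning above reflects the Kubernetes version-skew policy: kubectl is only supported within one minor version of the API server, and 1.20 against 1.22 is two minors apart. The hint in the log points at minikube's bundled kubectl, which can be exercised as, for example:

	out/minikube-linux-amd64 -p newest-cni-20210813204926-13784 kubectl -- version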
	I0813 20:51:08.546768  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.046384  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.546599  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.046701  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.546641  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.046329  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.546622  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.046214  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.546737  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.666694  233224 kubeadm.go:985] duration metric: took 12.281927379s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:12.666726  233224 kubeadm.go:392] StartCluster complete in 5m41.350158589s
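The half-second get-sa-default polling above is minikube waiting for the default ServiceAccount to exist before elevateKubeSystemPrivileges completes. A minimal sketch of the same poll, assuming the binary and kubeconfig paths shown in the log:

	until sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done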
	I0813 20:51:12.666746  233224 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:12.666841  233224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:12.669323  233224 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:13.198236  233224 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813204216-13784" rescaled to 1
	I0813 20:51:13.198297  233224 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:51:13.198331  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:13.200510  233224 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:13.198427  233224 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:13.200649  233224 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200666  233224 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200671  233224 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:13.200686  233224 addons.go:59] Setting dashboard=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.198561  233224 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:13.200707  233224 addons.go:135] Setting addon dashboard=true in "no-preload-20210813204216-13784"
	I0813 20:51:13.200710  233224 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813204216-13784"
	W0813 20:51:13.200714  233224 addons.go:147] addon dashboard should already be in state true
	I0813 20:51:13.200722  233224 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200733  233224 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:13.200743  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200748  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200588  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:13.200700  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200713  233224 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200905  233224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813204216-13784"
	I0813 20:51:13.201200  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201286  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201320  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201369  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.268820  233224 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.268850  233224 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:13.268885  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.269529  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.272105  233224 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.272280  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:13.276915  233224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:13.275633  233224 node_ready.go:49] node "no-preload-20210813204216-13784" has status "Ready":"True"
	I0813 20:51:13.277012  233224 node_ready.go:38] duration metric: took 4.87652ms waiting for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.277035  233224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:13.277050  233224 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.277062  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:13.277114  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.280067  233224 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.282273  233224 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:13.282349  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:13.282360  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:13.282428  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.288178  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:13.302483  233224 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.302581  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:13.302600  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:13.302672  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.364847  233224 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.364873  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:13.364933  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.394311  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.422036  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.432725  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.457704  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.517628  233224 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
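The replace pipeline at 20:51:13.272280 rewrites the coredns ConfigMap in place: the sed expression inserts a hosts stanza immediately before the forward . /etc/resolv.conf directive, so the resulting Corefile contains:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}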
	I0813 20:51:13.528393  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.620168  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:13.620195  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:13.671071  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.681321  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:13.681356  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:13.689240  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:13.689265  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:13.774865  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:13.774905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:13.862937  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:13.862968  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:13.866582  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:13.866605  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:13.965927  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:13.965951  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:13.986024  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:14.070287  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:14.070319  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:14.189473  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:14.189565  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:14.364541  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:14.364569  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:14.492877  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:14.492905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:14.596170  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:14.596202  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:14.663166  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.134726824s)
	I0813 20:51:14.669029  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:15.296512  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.310389487s)
	I0813 20:51:15.296557  233224 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813204216-13784"
	I0813 20:51:15.375159  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.190525  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.521448806s)
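Every addon staged above follows the same two-step pattern: manifests are scp'd into /etc/kubernetes/addons and then applied with the cluster's own kubectl binary. The user-facing equivalent outside a test run would be along the lines of:

	out/minikube-linux-amd64 -p no-preload-20210813204216-13784 addons enable metrics-server
	out/minikube-linux-amd64 -p no-preload-20210813204216-13784 addons enable dashboard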
	I0813 20:51:12.032028  271328 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:51:12.032292  271328 start.go:160] libmachine.API.Create for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:12.032325  271328 client.go:168] LocalClient.Create starting
	I0813 20:51:12.032388  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:51:12.032418  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032440  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032571  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:51:12.032593  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032613  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032954  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:51:12.084329  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:51:12.084421  271328 network_create.go:255] running [docker network inspect auto-20210813204009-13784] to gather additional debugging logs...
	I0813 20:51:12.084441  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784
	W0813 20:51:12.129703  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 returned with exit code 1
	I0813 20:51:12.129740  271328 network_create.go:258] error running [docker network inspect auto-20210813204009-13784]: docker network inspect auto-20210813204009-13784: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210813204009-13784
	I0813 20:51:12.129756  271328 network_create.go:260] output of [docker network inspect auto-20210813204009-13784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210813204009-13784
	
	** /stderr **
	I0813 20:51:12.129811  271328 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:12.181560  271328 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-e58530d1cbfd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c3:d4:16:b0}}
	I0813 20:51:12.182554  271328 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0003f6078] misses:0}
	I0813 20:51:12.182616  271328 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:51:12.182634  271328 network_create.go:106] attempt to create docker network auto-20210813204009-13784 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:51:12.182698  271328 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204009-13784
	I0813 20:51:12.265555  271328 network_create.go:90] docker network auto-20210813204009-13784 192.168.58.0/24 created
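Subnet selection skipped 192.168.49.0/24 (held by an earlier cluster's bridge) and reserved 192.168.58.0/24 instead. The created network can be confirmed directly; the expected output here is 192.168.58.0/24 and 192.168.58.1:

	docker network inspect auto-20210813204009-13784 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'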
	I0813 20:51:12.265592  271328 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20210813204009-13784" container
	I0813 20:51:12.265659  271328 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:51:12.325195  271328 cli_runner.go:115] Run: docker volume create auto-20210813204009-13784 --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:51:12.375214  271328 oci.go:102] Successfully created a docker volume auto-20210813204009-13784
	I0813 20:51:12.375313  271328 cli_runner.go:115] Run: docker run --rm --name auto-20210813204009-13784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --entrypoint /usr/bin/test -v auto-20210813204009-13784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:51:13.255475  271328 oci.go:106] Successfully prepared a docker volume auto-20210813204009-13784
	W0813 20:51:13.255535  271328 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:51:13.255544  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:51:13.255605  271328 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:51:13.255907  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:13.255936  271328 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:51:13.256015  271328 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:51:13.443619  271328 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210813204009-13784 --name auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210813204009-13784 --network auto-20210813204009-13784 --ip 192.168.58.2 --volume auto-20210813204009-13784:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:51:14.118301  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Running}}
	I0813 20:51:14.185140  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:14.236626  271328 cli_runner.go:115] Run: docker exec auto-20210813204009-13784 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:51:14.394377  271328 oci.go:278] the created container "auto-20210813204009-13784" has a running status.
	I0813 20:51:14.394412  271328 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa...
	I0813 20:51:14.559698  271328 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:51:14.962022  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:15.017995  271328 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:51:15.018017  271328 kic_runner.go:115] Args: [docker exec --privileged auto-20210813204009-13784 chown docker:docker /home/docker/.ssh/authorized_keys]
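The key pair generated at 20:51:14.394412 becomes the container's /home/docker/.ssh/authorized_keys entry, and it is the identity every later sshutil client in this log uses over the published 127.0.0.1 ports. A manual connection with the same key would look like (port 32975 is the value the SSH dialer lines below report):

	ssh -p 32975 -i /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa docker@127.0.0.1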
	I0813 20:51:16.192846  233224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:16.192890  233224 addons.go:344] enableAddons completed in 2.994475177s
	I0813 20:51:17.804083  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.801657  271328 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.545595555s)
	I0813 20:51:17.801693  271328 kic.go:188] duration metric: took 4.545754 seconds to extract preloaded images to volume
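The 4.5s step above unpacks the lz4 preload tarball into the named volume created at 20:51:12.375214, which backs the node container's /var. The volume itself can be examined with the standard docker command:

	docker volume inspect auto-20210813204009-13784 --format '{{.Mountpoint}}'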
	I0813 20:51:17.801770  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:17.842060  271328 machine.go:88] provisioning docker machine ...
	I0813 20:51:17.842103  271328 ubuntu.go:169] provisioning hostname "auto-20210813204009-13784"
	I0813 20:51:17.842167  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:17.880732  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:17.880934  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:17.880952  271328 main.go:130] libmachine: About to run SSH command:
	sudo hostname auto-20210813204009-13784 && echo "auto-20210813204009-13784" | sudo tee /etc/hostname
	I0813 20:51:18.049279  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: auto-20210813204009-13784
	
	I0813 20:51:18.049355  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.089070  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.089215  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.089233  271328 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210813204009-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210813204009-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210813204009-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:51:18.214361  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:51:18.214400  271328 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:51:18.214423  271328 ubuntu.go:177] setting up certificates
	I0813 20:51:18.214435  271328 provision.go:83] configureAuth start
	I0813 20:51:18.214499  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:18.257160  271328 provision.go:138] copyHostCerts
	I0813 20:51:18.257225  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:51:18.257232  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:51:18.257274  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:51:18.257345  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:51:18.257355  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:51:18.257373  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:51:18.257422  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:51:18.257430  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:51:18.257445  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:51:18.257520  271328 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.auto-20210813204009-13784 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210813204009-13784]
	I0813 20:51:18.405685  271328 provision.go:172] copyRemoteCerts
	I0813 20:51:18.405745  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:51:18.405785  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.445891  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:18.536412  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:51:18.553289  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0813 20:51:18.568793  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:51:18.583774  271328 provision.go:86] duration metric: configureAuth took 369.326679ms
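configureAuth produced a server certificate with the SANs listed at 20:51:18.257520 (node IP, localhost, minikube, and the profile name). A standard openssl call against the file copied in the copyRemoteCerts step shows what was written:

	openssl x509 -in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'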
	I0813 20:51:18.583798  271328 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:51:18.583946  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:18.584072  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.627524  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.627677  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.627697  271328 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:51:19.012135  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:51:19.012167  271328 machine.go:91] provisioned docker machine in 1.170081385s
	I0813 20:51:19.012178  271328 client.go:171] LocalClient.Create took 6.979844019s
	I0813 20:51:19.012195  271328 start.go:168] duration metric: libmachine.API.Create for "auto-20210813204009-13784" took 6.979905282s
	I0813 20:51:19.012204  271328 start.go:267] post-start starting for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:19.012215  271328 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:51:19.012274  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:51:19.012321  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.051463  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.148765  271328 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:51:19.151322  271328 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:51:19.151341  271328 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:51:19.151349  271328 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:51:19.151355  271328 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:51:19.151364  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:51:19.151409  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:51:19.151507  271328 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:51:19.151607  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:51:19.158200  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:19.176073  271328 start.go:270] post-start completed in 163.849198ms
	I0813 20:51:19.176519  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.224022  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:19.224268  271328 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:51:19.224328  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.265461  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.357703  271328 start.go:129] duration metric: createHost completed in 7.327939716s
	I0813 20:51:19.357731  271328 start.go:80] releasing machines lock for "auto-20210813204009-13784", held for 7.328093299s
	I0813 20:51:19.357829  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.403591  271328 ssh_runner.go:149] Run: systemctl --version
	I0813 20:51:19.403631  271328 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:51:19.403663  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.403725  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.454924  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.455089  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.690299  271328 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:51:19.711263  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:51:19.720449  271328 docker.go:153] disabling docker service ...
	I0813 20:51:19.720510  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:51:19.729566  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:51:19.738541  271328 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:51:19.809055  271328 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:51:19.878138  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:51:19.887210  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:51:19.901071  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:51:19.909825  271328 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:51:19.909855  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 20:51:19.918547  271328 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:51:19.925341  271328 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:51:19.925401  271328 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:51:19.932883  271328 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:51:19.939083  271328 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:51:20.008572  271328 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 20:51:20.019341  271328 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:51:20.019407  271328 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:51:20.022897  271328 start.go:413] Will wait 60s for crictl version
	I0813 20:51:20.022952  271328 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:51:20.049207  271328 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:51:20.049276  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.118062  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.185186  271328 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:51:20.185268  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:20.231193  271328 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:51:20.234527  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:51:20.243481  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:20.243537  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.298894  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.298920  271328 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:51:20.298967  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.326049  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.326070  271328 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:51:20.326138  271328 ssh_runner.go:149] Run: crio config
	I0813 20:51:20.405222  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:20.405254  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:20.405269  271328 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:51:20.405286  271328 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210813204009-13784 NodeName:auto-20210813204009-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:51:20.405450  271328 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "auto-20210813204009-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:51:20.406210  271328 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=auto-20210813204009-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:51:20.406291  271328 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:51:20.414073  271328 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:51:20.414143  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:51:20.420611  271328 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (556 bytes)
	I0813 20:51:20.432233  271328 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:51:20.443622  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
	I0813 20:51:20.454650  271328 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:51:20.457221  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
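
Both /etc/hosts edits above use the same idiom: filter out any existing record for the name, append a fresh one, and cp the result back over the file (cp rather than mv, likely because /etc/hosts is bind-mounted into the container and must be rewritten in place rather than have its inode replaced). A standard-library sketch of that upsert (the helper name is ours):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostRecord rewrites an /etc/hosts-style file so exactly one line
    // maps name to ip, mirroring the "grep -v ...; echo ...; cp" pipeline above.
    func upsertHostRecord(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        // Overwrite in place: replacing the inode would break a bind mount.
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHostRecord("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
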
	I0813 20:51:20.467941  271328 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784 for IP: 192.168.58.2
	I0813 20:51:20.467993  271328 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:51:20.468013  271328 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:51:20.468073  271328 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key
	I0813 20:51:20.468084  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt with IP's: []
	I0813 20:51:20.834054  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt ...
	I0813 20:51:20.834092  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: {Name:mk7fec601fb1fafe5c23646db0e11a54596e8bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834267  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key ...
	I0813 20:51:20.834281  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key: {Name:mk1cae1776891d9f945556a388916d00049fb0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834361  271328 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041
	I0813 20:51:20.834373  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:51:21.063423  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 ...
	I0813 20:51:21.063459  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041: {Name:mk251c4f0d507b09ef6d31c1707428420ec85197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065611  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 ...
	I0813 20:51:21.065633  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041: {Name:mk4d38dae507bc9d1c850061ba3bdb1c6e2ca7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065723  271328 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt
	I0813 20:51:21.065806  271328 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key
	I0813 20:51:21.065871  271328 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key
	I0813 20:51:21.065883  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt with IP's: []
	I0813 20:51:21.152453  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt ...
	I0813 20:51:21.152481  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt: {Name:mke5a626b5b050e50bb47e400c3bba4f5fb88778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152637  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key ...
	I0813 20:51:21.152650  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key: {Name:mkb2a71eb086a15771297e8ab11e852569412fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152807  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:51:21.152843  271328 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:51:21.152855  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:51:21.152880  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:51:21.152909  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:51:21.152931  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:51:21.152971  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:21.153904  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:51:21.171484  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:51:21.187960  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:51:21.205911  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:51:21.223614  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:51:21.239905  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:51:21.255368  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:51:21.271028  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:51:21.286769  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:51:21.302428  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:51:21.317590  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:51:21.336580  271328 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:51:21.355880  271328 ssh_runner.go:149] Run: openssl version
	I0813 20:51:21.361210  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:51:21.368318  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371245  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371283  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.376426  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:51:21.384634  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:51:21.392048  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395072  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395113  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.400410  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:51:21.408727  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:51:21.415718  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418881  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418923  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.423802  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
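
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed trust directory: each CA copied into /usr/share/ca-certificates must be reachable from /etc/ssl/certs via a <subject-hash>.0 symlink for certificate lookup to find it. A sketch of one such installation (assumes the openssl binary is on PATH; the helper name is ours):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCACert computes the OpenSSL subject hash for a PEM certificate
    // and creates the /etc/ssl/certs/<hash>.0 symlink, like the ln -fs steps above.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        os.Remove(link) // -f semantics: replace a stale link if present
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
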
	I0813 20:51:21.431770  271328 kubeadm.go:390] StartCluster: {Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:21.431861  271328 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:51:21.431914  271328 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:21.455876  271328 cri.go:76] found id: ""
	I0813 20:51:21.455927  271328 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:51:21.463196  271328 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:21.471334  271328 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:21.471384  271328 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:21.478565  271328 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:21.478610  271328 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
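
The long --ignore-preflight-errors list above is how minikube runs kubeadm inside a docker-driver container, where checks such as swap, already-probed ports, and SystemVerification are expected to trip. A sketch of assembling that invocation (flag values copied from the log line above and abbreviated; the wrapper function is illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeadmInitCmd builds the init invocation used above: a pinned kubeadm
    // binary, the generated config, and an explicit list of preflight checks
    // to skip because they cannot pass inside the driver container.
    func kubeadmInitCmd(binDir, config string, skip []string) *exec.Cmd {
        return exec.Command("sudo", "env", "PATH="+binDir+":/usr/bin",
            binDir+"/kubeadm", "init", "--config", config,
            "--ignore-preflight-errors="+strings.Join(skip, ","))
    }

    func main() {
        cmd := kubeadmInitCmd("/var/lib/minikube/binaries/v1.21.3",
            "/var/tmp/minikube/kubeadm.yaml",
            []string{"Swap", "Mem", "SystemVerification"})
        fmt.Println(strings.Join(cmd.Args, " "))
    }
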
	I0813 20:51:18.862764  233224 pod_ready.go:92] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.862797  233224 pod_ready.go:81] duration metric: took 5.574582513s waiting for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.862817  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867642  233224 pod_ready.go:92] pod "coredns-78fcd69978-kbf57" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.867658  233224 pod_ready.go:81] duration metric: took 4.833167ms waiting for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867668  233224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:20.879817  233224 pod_ready.go:102] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:21.378531  233224 pod_ready.go:92] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.378554  233224 pod_ready.go:81] duration metric: took 2.510878118s waiting for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.378572  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382866  233224 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.382882  233224 pod_ready.go:81] duration metric: took 4.296091ms waiting for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382892  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386782  233224 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.386801  233224 pod_ready.go:81] duration metric: took 3.90189ms waiting for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386813  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390480  233224 pod_ready.go:92] pod "kube-proxy-vf22v" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.390494  233224 pod_ready.go:81] duration metric: took 3.672888ms waiting for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390501  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604404  233224 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.604433  233224 pod_ready.go:81] duration metric: took 213.923321ms waiting for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604445  233224 pod_ready.go:38] duration metric: took 8.327391702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:21.604469  233224 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:51:21.604523  233224 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:21.685434  233224 api_server.go:70] duration metric: took 8.487094951s to wait for apiserver process to appear ...
	I0813 20:51:21.685459  233224 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:51:21.685471  233224 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:51:21.691084  233224 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:51:21.691907  233224 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:51:21.691929  233224 api_server.go:129] duration metric: took 6.463677ms to wait for apiserver health ...
	I0813 20:51:21.691939  233224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:51:21.806833  233224 system_pods.go:59] 10 kube-system pods found
	I0813 20:51:21.806865  233224 system_pods.go:61] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:21.806872  233224 system_pods.go:61] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:21.806878  233224 system_pods.go:61] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:21.806884  233224 system_pods.go:61] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:21.806890  233224 system_pods.go:61] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:21.806897  233224 system_pods.go:61] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:21.806903  233224 system_pods.go:61] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:21.806909  233224 system_pods.go:61] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:21.806921  233224 system_pods.go:61] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:21.806947  233224 system_pods.go:61] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:21.806955  233224 system_pods.go:74] duration metric: took 115.009603ms to wait for pod list to return data ...
	I0813 20:51:21.806968  233224 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:51:22.003355  233224 default_sa.go:45] found service account: "default"
	I0813 20:51:22.003384  233224 default_sa.go:55] duration metric: took 196.403211ms for default service account to be created ...
	I0813 20:51:22.003397  233224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:51:22.206326  233224 system_pods.go:86] 10 kube-system pods found
	I0813 20:51:22.206359  233224 system_pods.go:89] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:22.206368  233224 system_pods.go:89] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:22.206376  233224 system_pods.go:89] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:22.206382  233224 system_pods.go:89] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:22.206390  233224 system_pods.go:89] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:22.206398  233224 system_pods.go:89] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:22.206407  233224 system_pods.go:89] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:22.206414  233224 system_pods.go:89] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:22.206428  233224 system_pods.go:89] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:22.206438  233224 system_pods.go:89] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:22.206451  233224 system_pods.go:126] duration metric: took 203.046705ms to wait for k8s-apps to be running ...
	I0813 20:51:22.206463  233224 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:51:22.206511  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:22.263444  233224 system_svc.go:56] duration metric: took 56.96766ms WaitForService to wait for kubelet.
	I0813 20:51:22.263482  233224 kubeadm.go:547] duration metric: took 9.065148102s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:51:22.263519  233224 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:51:22.403039  233224 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:51:22.403065  233224 node_conditions.go:123] node cpu capacity is 8
	I0813 20:51:22.403081  233224 node_conditions.go:105] duration metric: took 139.554694ms to run NodePressure ...
	I0813 20:51:22.403096  233224 start.go:231] waiting for startup goroutines ...
	I0813 20:51:22.450275  233224 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:22.455408  233224 out.go:177] 
	W0813 20:51:22.455568  233224 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:22.462541  233224 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:22.464230  233224 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813204216-13784" cluster and "default" namespace by default
	I0813 20:51:21.794120  271328 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:25.163675  271328 out.go:204]   - Booting up control plane ...
	I0813 20:51:28.722579  240241 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.346300289s)
	I0813 20:51:28.722667  240241 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:28.732254  240241 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:28.732318  240241 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:28.757337  240241 cri.go:76] found id: ""
	I0813 20:51:28.757392  240241 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:28.764551  240241 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:28.764599  240241 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:28.771196  240241 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:28.771247  240241 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:29.067432  240241 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:29.947085  240241 out.go:204]   - Booting up control plane ...
	I0813 20:51:40.720555  271328 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:41.136233  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:41.136257  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:41.138470  271328 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:41.138531  271328 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:41.142093  271328 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:41.142114  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:41.159919  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:51:43.999786  240241 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:44.412673  240241 cni.go:93] Creating CNI manager for ""
	I0813 20:51:44.412698  240241 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:44.414497  240241 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:44.414556  240241 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:44.418236  240241 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:44.418253  240241 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:44.430863  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:51:41.568473  271328 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:41.568595  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:41.568620  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=auto-20210813204009-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:41.684391  271328 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:41.684482  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:42.252918  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:42.753184  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:43.253340  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:43.752498  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.252543  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.752811  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.253371  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.753399  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.252813  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.663289  240241 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:44.663354  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813204407-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_44_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.663359  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.785476  240241 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:44.785625  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.361034  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.860496  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.360813  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.861457  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.360900  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.860847  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.361284  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.860717  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.361233  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.753324  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.622147  271328 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.868786003s)
	I0813 20:51:48.753354  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.860593  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.861309  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.361330  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.860839  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.360530  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.261881  271328 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.5084884s)
	I0813 20:51:52.752569  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.253464  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.753088  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.252748  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.752605  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.253338  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.752990  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.253395  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.860519  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.360704  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.861401  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.360874  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.861184  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.935142  240241 kubeadm.go:985] duration metric: took 12.271847359s to wait for elevateKubeSystemPrivileges.
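
The repeated "kubectl get sa default" runs above (both runners do it) are a fixed-interval poll: kubeadm has returned, but the cluster is only usable once the "default" ServiceAccount exists. A sketch of that wait loop at roughly the ~500ms cadence visible in the timestamps (names and the timeout value are ours):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls kubectl until the "default" ServiceAccount
    // appears, matching the retry cadence visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig", kubeconfig).Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.21.3/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println(err)
    }
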
	I0813 20:51:56.935173  240241 kubeadm.go:392] StartCluster complete in 5m59.56574911s
	I0813 20:51:56.935192  240241 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:56.935280  240241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:56.936618  240241 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.471369  240241 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813204407-13784" rescaled to 1
	I0813 20:51:57.471434  240241 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:57.473147  240241 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:57.473200  240241 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:57.471473  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:57.471495  240241 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:57.473309  240241 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473332  240241 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473329  240241 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.473341  240241 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:57.473359  240241 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473373  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.471677  240241 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:57.473389  240241 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473397  240241 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473415  240241 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473418  240241 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.473375  240241 addons.go:147] addon dashboard should already be in state true
	W0813 20:51:57.473430  240241 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:57.473453  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.473469  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.473755  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473923  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473970  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473984  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.500075  240241 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813204407-13784" to be "Ready" ...
	I0813 20:51:57.508390  240241 node_ready.go:49] node "default-k8s-different-port-20210813204407-13784" has status "Ready":"True"
	I0813 20:51:57.508412  240241 node_ready.go:38] duration metric: took 8.303909ms waiting for node "default-k8s-different-port-20210813204407-13784" to be "Ready" ...
	I0813 20:51:57.508425  240241 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:57.530074  240241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:57.559993  240241 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:57.561443  240241 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:56.753159  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.252816  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.323178  271328 kubeadm.go:985] duration metric: took 15.754657804s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:57.323205  271328 kubeadm.go:392] StartCluster complete in 35.891441868s
	I0813 20:51:57.323233  271328 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.323334  271328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:57.325280  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.844496  271328 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20210813204009-13784" rescaled to 1
	I0813 20:51:57.844542  271328 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:57.847125  271328 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:57.847179  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:57.844600  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:57.844628  271328 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:51:57.844773  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:57.847273  271328 addons.go:59] Setting storage-provisioner=true in profile "auto-20210813204009-13784"
	I0813 20:51:57.847289  271328 addons.go:135] Setting addon storage-provisioner=true in "auto-20210813204009-13784"
	W0813 20:51:57.847298  271328 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:57.847304  271328 addons.go:59] Setting default-storageclass=true in profile "auto-20210813204009-13784"
	I0813 20:51:57.847325  271328 host.go:66] Checking if "auto-20210813204009-13784" exists ...
	I0813 20:51:57.847330  271328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20210813204009-13784"
	I0813 20:51:57.847657  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.847848  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.914584  271328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:57.914695  271328 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.914708  271328 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:57.914767  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:57.926636  271328 addons.go:135] Setting addon default-storageclass=true in "auto-20210813204009-13784"
	W0813 20:51:57.926670  271328 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:57.926704  271328 host.go:66] Checking if "auto-20210813204009-13784" exists ...
	I0813 20:51:57.927086  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.944440  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:57.946970  271328 node_ready.go:35] waiting up to 5m0s for node "auto-20210813204009-13784" to be "Ready" ...
	I0813 20:51:57.951330  271328 node_ready.go:49] node "auto-20210813204009-13784" has status "Ready":"True"
	I0813 20:51:57.951353  271328 node_ready.go:38] duration metric: took 4.355543ms waiting for node "auto-20210813204009-13784" to be "Ready" ...
	I0813 20:51:57.951367  271328 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:57.964918  271328 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:57.974587  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:57.995812  271328 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:57.995845  271328 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:57.995903  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:58.104226  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:58.127261  271328 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:58.207306  271328 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:58.318052  271328 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
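	Note: the sed pipeline at 20:51:57.944440 above splices a hosts block into the CoreDNS Corefile immediately before its forward directive, which is how host.minikube.internal comes to resolve to the host-side gateway (192.168.58.1) from inside the cluster. Reconstructed from that sed expression (trailing forward options omitted), the patched Corefile fragment reads:
	
	        hosts {
	           192.168.58.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	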
	I0813 20:51:57.560121  240241 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.562962  240241 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:57.563043  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:57.563058  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:57.563087  240241 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:57.563122  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.563145  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:57.563156  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:57.563204  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.563285  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:57.563317  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.585350  240241 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.585389  240241 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:57.585423  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.586491  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.640285  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.643118  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.651320  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.655597  240241 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:57.655617  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:57.655661  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.659397  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:57.708263  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.772822  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:57.772851  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:57.775665  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:57.775686  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:57.778938  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.866896  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:57.866921  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:57.875909  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:57.875935  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:57.895465  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:57.895493  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:57.906579  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:57.906602  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:57.958953  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:57.977795  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:57.977819  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:57.988125  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:58.065141  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:58.065163  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:58.173899  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:58.173923  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:58.280880  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:58.280914  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:58.289511  240241 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0813 20:51:58.375994  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:58.376079  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:58.488006  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:58.488037  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
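	Note: each "scp memory --> <path> (N bytes)" line above streams a manifest that ships embedded in the minikube binary straight over the sshutil connection; nothing is read from a local file. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh (illustrative only -- minikube's ssh_runner.go differs in detail, and writeRemote is a made-up helper name):
	
	    package assets
	
	    import (
	        "bytes"
	
	        "golang.org/x/crypto/ssh"
	    )
	
	    // writeRemote streams an in-memory asset to dest on the node over an
	    // already-established SSH connection, approximating what the
	    // "scp memory" log lines record.
	    func writeRemote(client *ssh.Client, data []byte, dest string) error {
	        sess, err := client.NewSession()
	        if err != nil {
	            return err
	        }
	        defer sess.Close()
	        sess.Stdin = bytes.NewReader(data) // manifest bytes never touch local disk
	        return sess.Run("sudo tee " + dest + " >/dev/null")
	    }
	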
	I0813 20:51:58.562447  240241 pod_ready.go:97] error getting pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-94bmz" not found
	I0813 20:51:58.562481  240241 pod_ready.go:81] duration metric: took 1.032368127s waiting for pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:58.562494  240241 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-94bmz" not found
	I0813 20:51:58.562502  240241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:58.578755  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:59.569598  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.79061998s)
	I0813 20:51:59.658034  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69903678s)
	I0813 20:51:59.658141  240241 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:59.658099  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.669942348s)
	I0813 20:52:00.558702  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.979881854s)
	I0813 20:51:58.812728  271328 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:51:58.812772  271328 addons.go:344] enableAddons completed in 968.157461ms
	I0813 20:51:59.995308  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:00.560716  240241 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0813 20:52:00.560785  240241 addons.go:344] enableAddons completed in 3.089294462s
	I0813 20:52:00.667954  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:03.098119  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:02.492544  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:04.992816  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:05.098856  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:07.099285  240241 pod_ready.go:92] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.099314  240241 pod_ready.go:81] duration metric: took 8.536802711s waiting for pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.099327  240241 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.103649  240241 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.103672  240241 pod_ready.go:81] duration metric: took 4.335636ms waiting for pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.103690  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.107793  240241 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.107812  240241 pod_ready.go:81] duration metric: took 4.11268ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.107827  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.114439  240241 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.114457  240241 pod_ready.go:81] duration metric: took 6.620724ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.114469  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5hsp" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.118338  240241 pod_ready.go:92] pod "kube-proxy-f5hsp" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.118352  240241 pod_ready.go:81] duration metric: took 3.876581ms waiting for pod "kube-proxy-f5hsp" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.118361  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.496572  240241 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.496591  240241 pod_ready.go:81] duration metric: took 378.224297ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.496599  240241 pod_ready.go:38] duration metric: took 9.98816095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
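	Note: the pod_ready waits summarized above poll each system-critical pod's Ready condition until it reports True or the timeout expires. A minimal client-go equivalent, as a sketch against v1.21-era APIs (this is not minikube's actual pod_ready.go):
	
	    package ready
	
	    import (
	        "context"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )
	
	    // waitPodReady blocks until the named pod's Ready condition is True,
	    // polling every 2s up to timeout.
	    func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
	            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	            if err != nil {
	                return false, nil // pod may not exist yet: keep polling
	            }
	            for _, cond := range pod.Status.Conditions {
	                if cond.Type == corev1.PodReady {
	                    return cond.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	    }
	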
	I0813 20:52:07.496618  240241 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:52:07.496655  240241 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:52:07.520058  240241 api_server.go:70] duration metric: took 10.048585682s to wait for apiserver process to appear ...
	I0813 20:52:07.520082  240241 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:52:07.520092  240241 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0813 20:52:07.524876  240241 api_server.go:265] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0813 20:52:07.525872  240241 api_server.go:139] control plane version: v1.21.3
	I0813 20:52:07.525891  240241 api_server.go:129] duration metric: took 5.802306ms to wait for apiserver health ...
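	Note: the healthz check at 20:52:07.520092 is a plain HTTPS GET against the apiserver that expects status 200 with body "ok", as logged above. A standalone sketch (TLS verification is skipped here only to keep it short; the real check trusts the cluster CA):
	
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	    )
	
	    func main() {
	        client := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	        }}
	        resp, err := client.Get("https://192.168.67.2:8444/healthz")
	        if err != nil {
	            panic(err)
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	    }
	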
	I0813 20:52:07.525914  240241 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:52:07.699622  240241 system_pods.go:59] 9 kube-system pods found
	I0813 20:52:07.699655  240241 system_pods.go:61] "coredns-558bd4d5db-gr5g8" [de1c554f-b8e1-49db-9f29-54dd16296938] Running
	I0813 20:52:07.699660  240241 system_pods.go:61] "etcd-default-k8s-different-port-20210813204407-13784" [fa938417-d881-4a17-bc90-cea2f1e1eb92] Running
	I0813 20:52:07.699664  240241 system_pods.go:61] "kindnet-xg9rd" [b422811b-55ea-4928-823d-0cf4e7b32f3e] Running
	I0813 20:52:07.699669  240241 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813204407-13784" [4b14160e-22dd-4c1f-b1a6-8a15f3d33d36] Running
	I0813 20:52:07.699673  240241 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813204407-13784" [bbac6064-38a3-4fc1-a3fe-61fc1820b250] Running
	I0813 20:52:07.699677  240241 system_pods.go:61] "kube-proxy-f5hsp" [1aed7397-275e-474d-9287-2624feb99c42] Running
	I0813 20:52:07.699681  240241 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813204407-13784" [0ce6078f-1102-4c32-883d-1a4f95d0f4cc] Running
	I0813 20:52:07.699689  240241 system_pods.go:61] "metrics-server-7c784ccb57-44694" [38bf7ccc-7705-4237-a220-b3b4e39f962d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:07.699694  240241 system_pods.go:61] "storage-provisioner" [d9ea2a9c-7cd6-4367-8335-5bdc8cfbcba1] Running
	I0813 20:52:07.699700  240241 system_pods.go:74] duration metric: took 173.777118ms to wait for pod list to return data ...
	I0813 20:52:07.699714  240241 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:52:07.897248  240241 default_sa.go:45] found service account: "default"
	I0813 20:52:07.897273  240241 default_sa.go:55] duration metric: took 197.547768ms for default service account to be created ...
	I0813 20:52:07.897282  240241 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:52:08.100655  240241 system_pods.go:86] 9 kube-system pods found
	I0813 20:52:08.100687  240241 system_pods.go:89] "coredns-558bd4d5db-gr5g8" [de1c554f-b8e1-49db-9f29-54dd16296938] Running
	I0813 20:52:08.100696  240241 system_pods.go:89] "etcd-default-k8s-different-port-20210813204407-13784" [fa938417-d881-4a17-bc90-cea2f1e1eb92] Running
	I0813 20:52:08.100705  240241 system_pods.go:89] "kindnet-xg9rd" [b422811b-55ea-4928-823d-0cf4e7b32f3e] Running
	I0813 20:52:08.100712  240241 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210813204407-13784" [4b14160e-22dd-4c1f-b1a6-8a15f3d33d36] Running
	I0813 20:52:08.100721  240241 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210813204407-13784" [bbac6064-38a3-4fc1-a3fe-61fc1820b250] Running
	I0813 20:52:08.100727  240241 system_pods.go:89] "kube-proxy-f5hsp" [1aed7397-275e-474d-9287-2624feb99c42] Running
	I0813 20:52:08.100734  240241 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210813204407-13784" [0ce6078f-1102-4c32-883d-1a4f95d0f4cc] Running
	I0813 20:52:08.100746  240241 system_pods.go:89] "metrics-server-7c784ccb57-44694" [38bf7ccc-7705-4237-a220-b3b4e39f962d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:08.100756  240241 system_pods.go:89] "storage-provisioner" [d9ea2a9c-7cd6-4367-8335-5bdc8cfbcba1] Running
	I0813 20:52:08.100771  240241 system_pods.go:126] duration metric: took 203.483249ms to wait for k8s-apps to be running ...
	I0813 20:52:08.100783  240241 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:52:08.100832  240241 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:08.111772  240241 system_svc.go:56] duration metric: took 10.982724ms WaitForService to wait for kubelet.
	I0813 20:52:08.111793  240241 kubeadm.go:547] duration metric: took 10.64032656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:52:08.111828  240241 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:52:08.297054  240241 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:52:08.297080  240241 node_conditions.go:123] node cpu capacity is 8
	I0813 20:52:08.297097  240241 node_conditions.go:105] duration metric: took 185.262995ms to run NodePressure ...
	I0813 20:52:08.297110  240241 start.go:231] waiting for startup goroutines ...
	I0813 20:52:08.342344  240241 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:52:08.344774  240241 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210813204407-13784" cluster and "default" namespace by default
	I0813 20:52:06.993296  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:09.493158  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:11.992153  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:14.493122  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:45:51 UTC, end at Fri 2021-08-13 20:52:21 UTC. --
	Aug 13 20:52:01 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:01.676533236Z" level=info msg="Image k8s.gcr.io/echoserver:1.4 not found" id=a9e2a656-ac7d-4c58-be93-de67259da5f2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:01 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:01.676974389Z" level=info msg="Pulling image: k8s.gcr.io/echoserver:1.4" id=787ff5b3-c9ad-4b52-a884-2c8042bf6602 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:52:01 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:01.679348629Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:02 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:02.360254108Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:07 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:07.932772424Z" level=info msg="Pulled image: k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=787ff5b3-c9ad-4b52-a884-2c8042bf6602 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:52:07 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:07.933617008Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=98c8181d-84cd-4e83-861a-eeedb965b283 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:07 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:07.934851799Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=98c8181d-84cd-4e83-861a-eeedb965b283 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:07 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:07.935641607Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=6ddef9de-c029-4043-aef8-9240ed5afc22 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.113994750Z" level=info msg="Created container 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=6ddef9de-c029-4043-aef8-9240ed5afc22 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.114517395Z" level=info msg="Starting container: 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1" id=264fb6b6-2a13-4c9b-8df5-31fa4ad16144 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.137593469Z" level=info msg="Started container 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=264fb6b6-2a13-4c9b-8df5-31fa4ad16144 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.913338650Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=135d9bec-6af4-46ad-8df5-d7731eaefd2b name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.915076518Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=135d9bec-6af4-46ad-8df5-d7731eaefd2b name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.915651647Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=c7e4140e-11d8-49eb-a3ac-bb539a9c712d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.917465825Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c7e4140e-11d8-49eb-a3ac-bb539a9c712d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.918287149Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=168b57a0-f06a-4968-867a-73364ddf3d70 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.074016106Z" level=info msg="Created container bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=168b57a0-f06a-4968-867a-73364ddf3d70 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.074541973Z" level=info msg="Starting container: bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c" id=fc86ddee-4924-4486-9da6-54841721437e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.098751233Z" level=info msg="Started container bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=fc86ddee-4924-4486-9da6-54841721437e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.917532097Z" level=info msg="Removing container: 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1" id=ff791e4e-85b3-4452-90c6-3f11b8d9bd80 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.962333142Z" level=info msg="Removed container 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=ff791e4e-85b3-4452-90c6-3f11b8d9bd80 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:14.771551032Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=018bdd6d-25bf-43f9-b48b-dc17c7592f46 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:14.771828072Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=018bdd6d-25bf-43f9-b48b-dc17c7592f46 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:14.772369723Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=83456264-8b9b-4324-b57b-4178055387fd name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:14.783344441Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
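	Note: the pull of fake.domain/k8s.gcr.io/echoserver:1.4 here is expected to fail: this profile points the metrics-server image at fake.domain (see "Using image fake.domain/k8s.gcr.io/echoserver:1.4" at 20:51:57.563087 above), a registry that does not exist, which is why metrics-server-7c784ccb57-44694 stays Pending / ContainersNotReady throughout the pod listings.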
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	bc763b4bcd6bb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   12 seconds ago      Exited              dashboard-metrics-scraper   1                   dccefa6f9ba7f
	eb7f966beddca       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   20 seconds ago      Running             kubernetes-dashboard        0                   664ec32a83d0f
	0643fb501141e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   20 seconds ago      Running             storage-provisioner         0                   2ee6c006120c4
	7bc03b2fb5b97       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   21 seconds ago      Running             coredns                     0                   e036676f80751
	ea2cef428a328       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   22 seconds ago      Running             kube-proxy                  0                   33c4a99332d82
	7cac311527849       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   22 seconds ago      Running             kindnet-cni                 0                   d4e9467385adb
	723cb987e243a       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   44 seconds ago      Running             kube-scheduler              0                   fad404fda12f9
	b889a4cfb9f98       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   44 seconds ago      Running             etcd                        0                   2b6195e25b03e
	a82cc1cca9cca       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   44 seconds ago      Running             kube-apiserver              0                   c49e0bc62636d
	18e273df7ef80       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   44 seconds ago      Running             kube-controller-manager     0                   0c249ea3f831c
	
	* 
	* ==> coredns [7bc03b2fb5b97ade5a5ced7d7239284d3648b6e56d26bae93cf0afb2762b8dd9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 821b10ea3c4cce3a8581cf6a394d92f0
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210813204407-13784
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20210813204407-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=default-k8s-different-port-20210813204407-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_51_44_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:51:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20210813204407-13784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:52:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:51:52 +0000   Fri, 13 Aug 2021 20:51:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:51:52 +0000   Fri, 13 Aug 2021 20:51:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:51:52 +0000   Fri, 13 Aug 2021 20:51:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:51:52 +0000   Fri, 13 Aug 2021 20:51:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20210813204407-13784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                cd5fc00c-b697-4c7f-b544-919a1ee5577b
	  Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-gr5g8                                                    100m (1%)     0 (0%)      70Mi (0%)       170Mi (0%)    25s
	  kube-system                 etcd-default-k8s-different-port-20210813204407-13784                        100m (1%)     0 (0%)      100Mi (0%)      0 (0%)        40s
	  kube-system                 kindnet-xg9rd                                                               100m (1%)     100m (1%)   50Mi (0%)       50Mi (0%)     25s
	  kube-system                 kube-apiserver-default-k8s-different-port-20210813204407-13784              250m (3%)     0 (0%)      0 (0%)          0 (0%)        32s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20210813204407-13784    200m (2%)     0 (0%)      0 (0%)          0 (0%)        32s
	  kube-system                 kube-proxy-f5hsp                                                            0 (0%)        0 (0%)      0 (0%)          0 (0%)        25s
	  kube-system                 kube-scheduler-default-k8s-different-port-20210813204407-13784              100m (1%)     0 (0%)      0 (0%)          0 (0%)        32s
	  kube-system                 metrics-server-7c784ccb57-44694                                             100m (1%)     0 (0%)      300Mi (0%)      0 (0%)        23s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)          0 (0%)        23s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-6h949                                  0 (0%)        0 (0%)      0 (0%)          0 (0%)        22s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-bdmvt                                        0 (0%)        0 (0%)      0 (0%)          0 (0%)        22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (11%)   100m (1%)
	  memory             520Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
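	Note: minikube prints these resource tables through a Go format string without escaping kubectl's literal '%', so raw reports render "100m (1%)" as "100m (1%!)(MISSING)": fmt treats the ')' after the stray '%' as a verb with a missing operand. A one-line reproduction:
	
	    fmt.Printf("100m (1%)\n") // prints: 100m (1%!)(MISSING)
	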
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 33s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet     Node default-k8s-different-port-20210813204407-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet     Node default-k8s-different-port-20210813204407-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet     Node default-k8s-different-port-20210813204407-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 22s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +3.895437] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[ +12.031205] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000003] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +1.787836] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[ +14.060065] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth132654c8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d2 33 13 cb 90 7c 08 06        .......3...|..
	[  +0.492422] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0537654e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 56 dc 40 69 33 08 06        .......V.@i3..
	[Aug13 20:52] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth42b216bb
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 75 7c 88 de fd 08 06        .......u|.....
	[  +0.348033] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth3a91f4fd
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1e a0 d8 e2 a6 b4 08 06        ..............
	[  +7.435044] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +5.490524] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000025] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +2.047860] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000002] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +4.034563] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd8e69602
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 93 4a 9f fb 2d 08 06        ........J..-..
	
	* 
	* ==> etcd [b889a4cfb9f98a0e1a75bc248fc206db415ef20ec427118e4f8f92c02d3ced22] <==
	* 2021-08-13 20:51:37.097787 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:51:37.097917 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 20:51:37.097967 I | embed: listening for peers on 192.168.67.2:2380
	raft2021/08/13 20:51:37 INFO: 8688e899f7831fc7 is starting a new election at term 1
	raft2021/08/13 20:51:37 INFO: 8688e899f7831fc7 became candidate at term 2
	raft2021/08/13 20:51:37 INFO: 8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2
	raft2021/08/13 20:51:37 INFO: 8688e899f7831fc7 became leader at term 2
	raft2021/08/13 20:51:37 INFO: raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2
	2021-08-13 20:51:37.581719 I | etcdserver: published {Name:default-k8s-different-port-20210813204407-13784 ClientURLs:[https://192.168.67.2:2379]} to cluster 9d8fdeb88b6def78
	2021-08-13 20:51:37.581797 I | embed: ready to serve client requests
	2021-08-13 20:51:37.581908 I | embed: ready to serve client requests
	2021-08-13 20:51:37.582224 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:51:37.582509 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:51:37.582587 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:51:37.583446 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:51:37.583519 I | embed: serving client requests on 192.168.67.2:2379
	2021-08-13 20:51:52.030472 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 20:51:52.236940 W | wal: sync duration of 1.661939646s, expected less than 1s
	2021-08-13 20:51:52.240180 W | etcdserver: request "header:<ID:2289934455866129373 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:1fc77b4148d3fbdc>" with result "size:42" took too long (1.665078515s) to execute
	2021-08-13 20:51:52.241536 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210813204407-13784\" " with result "range_response_count:1 size:6219" took too long (2.168556552s) to execute
	2021-08-13 20:51:52.611813 W | etcdserver: read-only range request "key:\"/registry/minions/default-k8s-different-port-20210813204407-13784\" " with result "range_response_count:1 size:6254" took too long (364.358353ms) to execute
	2021-08-13 20:51:52.611996 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (302.425875ms) to execute
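	Note how these wal-sync and read-only-range warnings line up with the kube-apiserver traces later in this report: the 1.66s fsync stall at 20:51:52.236940 surfaces there as the 2.1-2.9s "Object stored in database" traces, since every API write blocks on an etcd commit.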
	2021-08-13 20:52:01.031580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:09.373063 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:19.373111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:52:22 up  1:35,  0 users,  load average: 3.82, 2.79, 2.31
	Linux default-k8s-different-port-20210813204407-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a82cc1cca9cca72dbbda29ae20add1371d75089232e162bebda4d7a93f4b7229] <==
	* Trace[1074450952]: ---"Object stored in database" 2172ms (20:51:00.244)
	Trace[1074450952]: [2.172308074s] [2.172308074s] END
	I0813 20:51:52.245449       1 trace.go:205] Trace[657919469]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:49.622) (total time: 2623ms):
	Trace[657919469]: ---"Object stored in database" 2622ms (20:51:00.245)
	Trace[657919469]: [2.623344488s] [2.623344488s] END
	I0813 20:51:52.245930       1 trace.go:205] Trace[1931834109]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-default-k8s-different-port-20210813204407-13784,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:50.072) (total time: 2173ms):
	Trace[1931834109]: ---"About to write a response" 2172ms (20:51:00.244)
	Trace[1931834109]: [2.173555292s] [2.173555292s] END
	I0813 20:51:52.246013       1 trace.go:205] Trace[2043869526]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:50.073) (total time: 2172ms):
	Trace[2043869526]: ---"Object stored in database" 2172ms (20:51:00.245)
	Trace[2043869526]: [2.172760156s] [2.172760156s] END
	I0813 20:51:52.253640       1 trace.go:205] Trace[1982690474]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:50.072) (total time: 2180ms):
	Trace[1982690474]: [2.180775869s] [2.180775869s] END
	I0813 20:51:52.612484       1 trace.go:205] Trace[1766309257]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:49.722) (total time: 2889ms):
	Trace[1766309257]: [2.889770193s] [2.889770193s] END
	I0813 20:51:56.936673       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:51:57.338694       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 20:52:02.175480       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 20:52:02.175545       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:52:02.175553       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 20:52:17.195451       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:52:17.195501       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:52:17.195512       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [18e273df7ef8068f15e66a27a5594ba380dcfee8d6092dc5991709cec2f3b326] <==
	* I0813 20:51:57.496245       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-94bmz"
	I0813 20:51:57.510664       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gr5g8"
	I0813 20:51:57.564602       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-94bmz"
	I0813 20:51:58.980949       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0813 20:51:59.079090       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 20:51:59.189272       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:51:59.288067       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-44694"
	I0813 20:51:59.896636       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 20:51:59.965841       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:59.975187       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:59.976021       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0813 20:51:59.984951       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:59.987350       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:59.987447       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:00.061090       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:00.067469       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:00.067797       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:00.067916       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:00.067974       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:00.073131       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:00.073140       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:00.073194       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:00.073218       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:00.083444       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-bdmvt"
	I0813 20:52:00.180237       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-6h949"
	
	* 
	* ==> kube-proxy [ea2cef428a3288c41b5f0fff2df8ec259ada78d0561b68c1c7b7641097b488dd] <==
	* I0813 20:51:59.887013       1 node.go:172] Successfully retrieved node IP: 192.168.67.2
	I0813 20:51:59.887063       1 server_others.go:140] Detected node IP 192.168.67.2
	W0813 20:51:59.887094       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:52:00.059369       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:52:00.059530       1 server_others.go:212] Using iptables Proxier.
	I0813 20:52:00.059593       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:52:00.059646       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:52:00.060072       1 server.go:643] Version: v1.21.3
	I0813 20:52:00.061180       1 config.go:224] Starting endpoint slice config controller
	I0813 20:52:00.061236       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:52:00.061399       1 config.go:315] Starting service config controller
	I0813 20:52:00.061443       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 20:52:00.068231       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:52:00.070286       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:52:00.162591       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:52:00.163107       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [723cb987e243addf3d24ee09648bd2c1d90f154b96b37e9dd97773224e44d0f9] <==
	* E0813 20:51:41.204699       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:51:41.205724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:51:41.205791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:51:41.205799       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:41.205892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:41.205923       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:51:41.206020       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:41.206034       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:51:41.206097       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:51:41.206135       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:51:41.206246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:51:41.206275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:41.206341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:51:41.206366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:51:42.035864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:51:42.073058       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:51:42.074788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:51:42.084746       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:51:42.150133       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:51:42.179402       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.359310       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.359327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.386800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:51:42.432634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0813 20:51:44.503811       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:45:51 UTC, end at Fri 2021-08-13 20:52:22 UTC. --
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:00.269156    5751 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2px5\" (UniqueName: \"kubernetes.io/projected/7fd5752b-835b-4cc8-9860-861195aef3d6-kube-api-access-z2px5\") pod \"kubernetes-dashboard-6fcdf4f6d-bdmvt\" (UID: \"7fd5752b-835b-4cc8-9860-861195aef3d6\") "
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:00.369995    5751 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/570c27ad-1f22-4a2b-b4b8-d09736125c6d-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-6h949\" (UID: \"570c27ad-1f22-4a2b-b4b8-d09736125c6d\") "
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:00.370074    5751 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59298\" (UniqueName: \"kubernetes.io/projected/570c27ad-1f22-4a2b-b4b8-d09736125c6d-kube-api-access-59298\") pod \"dashboard-metrics-scraper-8685c45546-6h949\" (UID: \"570c27ad-1f22-4a2b-b4b8-d09736125c6d\") "
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.780770    5751 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.780830    5751 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.781004    5751 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-688pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-44694_kube-system(38bf7ccc-7705-4237-a220-b3b4e39f962d): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.781069    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-44694" podUID=38bf7ccc-7705-4237-a220-b3b4e39f962d
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.894368    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-44694" podUID=38bf7ccc-7705-4237-a220-b3b4e39f962d
	Aug 13 20:52:01 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:01.898896    5751 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:08.912935    5751 scope.go:111] "RemoveContainer" containerID="59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1"
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:09.916432    5751 scope.go:111] "RemoveContainer" containerID="59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1"
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:09.916528    5751 scope.go:111] "RemoveContainer" containerID="bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c"
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:09.916903    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-6h949_kubernetes-dashboard(570c27ad-1f22-4a2b-b4b8-d09736125c6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949" podUID=570c27ad-1f22-4a2b-b4b8-d09736125c6d
	Aug 13 20:52:10 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:10.294281    5751 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/docker/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:52:10 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:10.919130    5751 scope.go:111] "RemoveContainer" containerID="bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c"
	Aug 13 20:52:10 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:10.919403    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-6h949_kubernetes-dashboard(570c27ad-1f22-4a2b-b4b8-d09736125c6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949" podUID=570c27ad-1f22-4a2b-b4b8-d09736125c6d
	Aug 13 20:52:11 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:11.921399    5751 scope.go:111] "RemoveContainer" containerID="bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c"
	Aug 13 20:52:11 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:11.921801    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-6h949_kubernetes-dashboard(570c27ad-1f22-4a2b-b4b8-d09736125c6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949" podUID=570c27ad-1f22-4a2b-b4b8-d09736125c6d
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:14.787945    5751 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:14.787986    5751 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:14.788126    5751 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-688pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-44694_kube-system(38bf7ccc-7705-4237-a220-b3b4e39f962d): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:14.788185    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-44694" podUID=38bf7ccc-7705-4237-a220-b3b4e39f962d
	Aug 13 20:52:19 default-k8s-different-port-20210813204407-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:52:19 default-k8s-different-port-20210813204407-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:52:19 default-k8s-different-port-20210813204407-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [eb7f966beddcab227c5a1f1a1ac25ac04ed015d0b9cfa75264ee8239b8c5db6a] <==
	* 2021/08/13 20:52:01 Starting overwatch
	2021/08/13 20:52:01 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:01 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:01 Using secret token for csrf signing
	2021/08/13 20:52:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:01 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 20:52:01 Generating JWE encryption key
	2021/08/13 20:52:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:01 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:01 Creating in-cluster Sidecar client
	2021/08/13 20:52:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:01 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [0643fb501141e4daad159a56c590d69eaffa9d182a31d9e47c66b7dcb2be547b] <==
	* I0813 20:52:01.090211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:52:01.102535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:52:01.102661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:52:01.111281       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:52:01.111425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813204407-13784_3564c416-fafb-451e-9107-9768ac6653a1!
	I0813 20:52:01.111422       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e62df17-df91-4aef-aba3-3fec816f6922", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210813204407-13784_3564c416-fafb-451e-9107-9768ac6653a1 became leader
	I0813 20:52:01.212604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813204407-13784_3564c416-fafb-451e-9107-9768ac6653a1!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784: exit status 2 (323.16494ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204407-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-44694
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204407-13784 describe pod metrics-server-7c784ccb57-44694
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210813204407-13784 describe pod metrics-server-7c784ccb57-44694: exit status 1 (62.393474ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-44694" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20210813204407-13784 describe pod metrics-server-7c784ccb57-44694: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210813204407-13784
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210813204407-13784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7",
	        "Created": "2021-08-13T20:44:09.364516334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240625,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T20:45:51.386618589Z",
	            "FinishedAt": "2021-08-13T20:45:48.826035232Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/hosts",
	        "LogPath": "/var/lib/docker/containers/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7-json.log",
	        "Name": "/default-k8s-different-port-20210813204407-13784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210813204407-13784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210813204407-13784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/469150e9909f7d7be3943d1dd25ec247ae1d7312051fa04e07f493cf91ac43fe-init/diff:/var/lib/docker/overlay2/943fe3220ffbb717e34c3a2f5b5b2fdeda7301ed4a76874eef2e1eb38505e4e8/diff:/var/lib/docker/overlay2/505a13d805b8925ea575203b212746b4c6b752ff72dffd2246c4099558656cc2/diff:/var/lib/docker/overlay2/388c00450b62c7281665d3ca088a24f69af29afc9a5ba3afe0a21475e2ff2113/diff:/var/lib/docker/overlay2/1656818a4d5fa8b14f8693d7d474c0e2986a6d89854bfb563b407d208767b817/diff:/var/lib/docker/overlay2/1b5d87e5adef028fa4372a52b068f2054998271c21595788f747a85e482387ae/diff:/var/lib/docker/overlay2/e83dc221296bc9d384306763217412e25c4373a3fb1709177e561d0ec503fab3/diff:/var/lib/docker/overlay2/771cc4aeb61a7dbfbfef6c923ae0559a88bf8c67dc109575425bb33834fa51ac/diff:/var/lib/docker/overlay2/fa5bcccf4fa296806f1ff06ded1f4fe6fdb7b054278a641b7ac7dc1464d7ccee/diff:/var/lib/docker/overlay2/de575a680f9fdb3f2a1a00c2fefe7e2a68582736d2e2a5d23f166d2dbdeb93b6/diff:/var/lib/docker/overlay2/f65d6539a139dc0de4ee0441ff54858a3ec211ff795b4e2ce74f351d47a74c04/diff:/var/lib/docker/overlay2/e91bb3d7f035be6908500ddf187b877ae151d98fe9b61d7b6d542a6d02f6a29a/diff:/var/lib/docker/overlay2/9c2fb16b430249faefafab5e7f4bec3ed7bd1bc53e20863dae66c4b0468d48e6/diff:/var/lib/docker/overlay2/a9b534c76af957fd912af6d9f7aed0f4c52061d5faa08ce50ec6da75e4e49638/diff:/var/lib/docker/overlay2/061beb5168fd9e04883e2718e6fe8cbe424f130bfa75396470e02e58ac114097/diff:/var/lib/docker/overlay2/b8d199dc4f23eb2525c5d4c3744c546f43b2523936b4e0fe4c82db936747d0ac/diff:/var/lib/docker/overlay2/166e486948eabd019fce71b07d3b50abf13f01cd6cfdfa6b2c40c24184a7991c/diff:/var/lib/docker/overlay2/d1d5ddb6fc9c5c1ba75980a0eaa8fa1b474afcd74c644e12ff43e5efd6b76484/diff:/var/lib/docker/overlay2/a190859edc6d6c9218cfe4fd20a88e9829a7ccfe1a350a9e481324d8ca0269a1/diff:/var/lib/docker/overlay2/2348996d84d7f9c955152e5743f2234e89556a899e87a82694a8e2cf62535d6e/diff:/var/lib/docker/overlay2/777291c97c2919d81c2bffb1c4597587c5f2b2a583d0ec2cb6196ab46b30305f/diff:/var/lib/docker/overlay2/dc389d804ffe67a250c7d4d08d685a32f0ddd8bc21b51de4c59868b8fe358638/diff:/var/lib/docker/overlay2/db05309a61f33c55297f43253bd58b02599116c4e9deb8e851c2a019fe097889/diff:/var/lib/docker/overlay2/e6b86f7663166fae8d555bf00c37e0013bc2a9d4945cf23328495e40dd16a14e/diff:/var/lib/docker/overlay2/467172f47cbed29bf7834224a18b147bf5660a20762bdec55f353946cc05919d/diff:/var/lib/docker/overlay2/0999b788538a5ffbba2cef6f2946713b89aeeb06aa60a25a6ae4ae47be6cfc96/diff:/var/lib/docker/overlay2/353b2fd589d77741de9edde6bc5851987e93f6c8baf408b99077938d46418995/diff:/var/lib/docker/overlay2/9a87dfa81de6b7028af63aa86b18d12d02296858f9f75e5473d0ba9b99cc14cf/diff:/var/lib/docker/overlay2/477e1f16736ff10f19a7af20d786f82cca4c81c9562ee6d6d177bbf81b8a0d63/diff:/var/lib/docker/overlay2/31be1d8a560f0584acd3129e2bafd672b4362771095fde7ec385c5dab359aecd/diff:/var/lib/docker/overlay2/5c05c1284ce239cd771e8b963fa5aeeaf1b8df170b15d931d5f6b72228a32332/diff:/var/lib/docker/overlay2/592aa512a772483b5529d64187d229065774d4b74dc238c4206d3c21f8cf267b/diff:/var/lib/docker/overlay2/cb8ce16c2072e24128e29f4960f5aca4f4c36f8ffef40c0b865464e909e4ecd3/diff:/var/lib/docker/overlay2/4e2610e345122070a0cfd1a2c955492220d946b691d9f89adacf9530c3aef942/diff:/var/lib/docker/overlay2/e8d5d9c7379106230380ee3c79e9c881b94bf9cb98dcbeea0c9e1cdbf3dc5ce3/diff:/var/lib/docker/overlay2/1a8a9cf70eb91de766b463ce9afca8876c104833a1fffaa61ba056cc7c0e4679/diff:/var/lib/docker/overlay2/6f371feed28276cff8f843c70e9e04c7eead1e378fbce774f2d393756cddcf5d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469150e9909f7d7be3943d1dd25ec247ae1d7312051fa04e07f493cf91ac43fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469150e9909f7d7be3943d1dd25ec247ae1d7312051fa04e07f493cf91ac43fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469150e9909f7d7be3943d1dd25ec247ae1d7312051fa04e07f493cf91ac43fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210813204407-13784",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210813204407-13784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210813204407-13784",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210813204407-13784",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210813204407-13784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b314eca3090283c97d42b0839d69d25d9425bf11eccdc20c92da930fbc23fb6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32954"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32951"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32953"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32952"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b314eca3090",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210813204407-13784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "be304b8d02d7"
	                    ],
	                    "NetworkID": "41c8a2fb43b67b7fc56fa6f5352beb90f41ea5d6d822a84ea53583b7212324ae",
	                    "EndpointID": "bec5ebb771a21eed2629cbfabb709ddffd96156e8d4abfe6f0f99fdc9763db24",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784: exit status 2 (321.842943ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210813204407-13784 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:47:45 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:48:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:06 UTC | Fri, 13 Aug 2021 20:49:06 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                               |                               |
	|         |  --container-runtime=crio                                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:17 UTC | Fri, 13 Aug 2021 20:49:17 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:19 UTC | Fri, 13 Aug 2021 20:49:20 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20210813204214-13784                       | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:21 UTC | Fri, 13 Aug 2021 20:49:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:22 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813204214-13784            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:49:26 UTC |
	|         | old-k8s-version-20210813204214-13784                       |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:26 UTC | Fri, 13 Aug 2021 20:50:24 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:24 UTC | Fri, 13 Aug 2021 20:50:25 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:25 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:50:46 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:47 UTC | Fri, 13 Aug 2021 20:50:50 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:00 UTC | Fri, 13 Aug 2021 20:51:00 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:03 UTC | Fri, 13 Aug 2021 20:51:03 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20210813204258-13784                           | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:05 UTC | Fri, 13 Aug 2021 20:51:06 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:06 UTC | Fri, 13 Aug 2021 20:51:10 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813204258-13784                | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:11 UTC | Fri, 13 Aug 2021 20:51:11 UTC |
	|         | embed-certs-20210813204258-13784                           |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20210813204926-13784 --memory=2200           | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:46 UTC | Fri, 13 Aug 2021 20:51:12 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813204926-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:13 UTC | Fri, 13 Aug 2021 20:51:13 UTC |
	|         | newest-cni-20210813204926-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:23 UTC | Fri, 13 Aug 2021 20:51:22 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813204216-13784                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:51:41 UTC | Fri, 13 Aug 2021 20:51:41 UTC |
	|         | no-preload-20210813204216-13784                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:45:49 UTC | Fri, 13 Aug 2021 20:52:08 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:18 UTC | Fri, 13 Aug 2021 20:52:19 UTC |
	|         | default-k8s-different-port-20210813204407-13784            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210813204407-13784            | default-k8s-different-port-20210813204407-13784 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:21 UTC | Fri, 13 Aug 2021 20:52:22 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:51:11
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:51:11.626877  271328 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:51:11.627052  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627060  271328 out.go:311] Setting ErrFile to fd 2...
	I0813 20:51:11.627064  271328 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:51:11.627159  271328 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:51:11.627409  271328 out.go:305] Setting JSON to false
	I0813 20:51:11.666661  271328 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":5634,"bootTime":1628882237,"procs":328,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:51:11.666785  271328 start.go:121] virtualization: kvm guest
	I0813 20:51:11.669469  271328 out.go:177] * [auto-20210813204009-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:51:11.669645  271328 notify.go:169] Checking for updates...
	I0813 20:51:11.671319  271328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:11.672833  271328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:51:11.674351  271328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:51:11.675913  271328 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:51:11.676594  271328 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:11.676833  271328 config.go:177] Loaded profile config "newest-cni-20210813204926-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.676967  271328 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:11.677023  271328 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:51:11.731497  271328 docker.go:132] docker version: linux-19.03.15
	I0813 20:51:11.731582  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.824730  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.775305956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.824827  271328 docker.go:244] overlay module found
	I0813 20:51:11.826307  271328 out.go:177] * Using the docker driver based on user configuration
	I0813 20:51:11.826332  271328 start.go:278] selected driver: docker
	I0813 20:51:11.826337  271328 start.go:751] validating driver "docker" against <nil>
	I0813 20:51:11.826355  271328 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:51:11.826409  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:51:11.826435  271328 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 20:51:11.827724  271328 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:51:11.828584  271328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:51:11.921127  271328 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-08-13 20:51:11.870452453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:51:11.921281  271328 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:51:11.921463  271328 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:51:11.921497  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:11.921506  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:11.921514  271328 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:51:11.921523  271328 start_flags.go:277] config:
	{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:11.924012  271328 out.go:177] * Starting control plane node auto-20210813204009-13784 in cluster auto-20210813204009-13784
	I0813 20:51:11.924056  271328 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:51:11.925270  271328 out.go:177] * Pulling base image ...
	I0813 20:51:11.925296  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:11.925327  271328 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:51:11.925325  271328 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:51:11.925373  271328 cache.go:56] Caching tarball of preloaded images
	I0813 20:51:11.925616  271328 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 20:51:11.925640  271328 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 20:51:11.925773  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:11.925807  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json: {Name:mk3876305492e8ad5450e3976660c9fa1c973e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:12.029343  271328 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:51:12.029375  271328 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:51:12.029391  271328 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:51:12.029434  271328 start.go:313] acquiring machines lock for auto-20210813204009-13784: {Name:mkd0aba803bc7694302f970fb956ac46569643dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:51:12.029622  271328 start.go:317] acquired machines lock for "auto-20210813204009-13784" in 163.616µs
	I0813 20:51:12.029653  271328 start.go:89] Provisioning new machine with config: &{Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:12.029748  271328 start.go:126] createHost starting for "" (driver="docker")
	I0813 20:51:11.473988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:11.474018  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:11.573472  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:11.573526  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:11.658988  264876 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.659019  264876 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:11.685635  264876 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:11.988027  264876 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813204926-13784"
	I0813 20:51:12.521134  264876 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:12.521160  264876 addons.go:344] enableAddons completed in 2.029586792s
	I0813 20:51:12.583342  264876 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:12.585304  264876 out.go:177] 
	W0813 20:51:12.585562  264876 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:12.587605  264876 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:12.589196  264876 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813204926-13784" cluster and "default" namespace by default
	I0813 20:51:08.546768  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.046384  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:09.546599  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.046701  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:10.546641  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.046329  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:11.546622  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.046214  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.546737  233224 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:12.666694  233224 kubeadm.go:985] duration metric: took 12.281927379s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:12.666726  233224 kubeadm.go:392] StartCluster complete in 5m41.350158589s
	I0813 20:51:12.666746  233224 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:12.666841  233224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:12.669323  233224 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:13.198236  233224 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813204216-13784" rescaled to 1
	I0813 20:51:13.198297  233224 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 20:51:13.198331  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:13.200510  233224 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:13.198427  233224 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:13.200649  233224 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200666  233224 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200671  233224 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:13.200686  233224 addons.go:59] Setting dashboard=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.198561  233224 config.go:177] Loaded profile config "no-preload-20210813204216-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.0-rc.0
	I0813 20:51:13.200707  233224 addons.go:135] Setting addon dashboard=true in "no-preload-20210813204216-13784"
	I0813 20:51:13.200710  233224 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813204216-13784"
	W0813 20:51:13.200714  233224 addons.go:147] addon dashboard should already be in state true
	I0813 20:51:13.200722  233224 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.200733  233224 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:13.200743  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200748  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200588  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:13.200700  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.200713  233224 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813204216-13784"
	I0813 20:51:13.200905  233224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813204216-13784"
	I0813 20:51:13.201200  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201286  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201320  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.201369  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.268820  233224 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813204216-13784"
	W0813 20:51:13.268850  233224 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:13.268885  233224 host.go:66] Checking if "no-preload-20210813204216-13784" exists ...
	I0813 20:51:13.269529  233224 cli_runner.go:115] Run: docker container inspect no-preload-20210813204216-13784 --format={{.State.Status}}
	I0813 20:51:13.272105  233224 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.272280  233224 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:13.276915  233224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:13.275633  233224 node_ready.go:49] node "no-preload-20210813204216-13784" has status "Ready":"True"
	I0813 20:51:13.277012  233224 node_ready.go:38] duration metric: took 4.87652ms waiting for node "no-preload-20210813204216-13784" to be "Ready" ...
	I0813 20:51:13.277035  233224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:13.277050  233224 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.277062  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:13.277114  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.280067  233224 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.282273  233224 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:13.282349  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:13.282360  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:13.282428  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.288178  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:13.302483  233224 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:13.302581  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:13.302600  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:13.302672  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.364847  233224 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.364873  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:13.364933  233224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210813204216-13784
	I0813 20:51:13.394311  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.422036  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.432725  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.457704  233224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32945 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813204216-13784/id_rsa Username:docker}
	I0813 20:51:13.517628  233224 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0813 20:51:13.528393  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:13.620168  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:13.620195  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:13.671071  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:13.681321  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:13.681356  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:13.689240  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:13.689265  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:13.774865  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:13.774905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:13.862937  233224 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:13.862968  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:13.866582  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:13.866605  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:13.965927  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:13.965951  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:13.986024  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:14.070287  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:14.070319  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:14.189473  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:14.189565  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:14.364541  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:14.364569  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:14.492877  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:14.492905  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:14.596170  233224 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:14.596202  233224 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:14.663166  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.134726824s)
	I0813 20:51:14.669029  233224 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:15.296512  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.310389487s)
	I0813 20:51:15.296557  233224 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813204216-13784"
	I0813 20:51:15.375159  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:16.190525  233224 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.521448806s)
	I0813 20:51:12.032028  271328 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 20:51:12.032292  271328 start.go:160] libmachine.API.Create for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:12.032325  271328 client.go:168] LocalClient.Create starting
	I0813 20:51:12.032388  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:51:12.032418  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032440  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032571  271328 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:51:12.032593  271328 main.go:130] libmachine: Decoding PEM data...
	I0813 20:51:12.032613  271328 main.go:130] libmachine: Parsing certificate...
	I0813 20:51:12.032954  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 20:51:12.084329  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 20:51:12.084421  271328 network_create.go:255] running [docker network inspect auto-20210813204009-13784] to gather additional debugging logs...
	I0813 20:51:12.084441  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784
	W0813 20:51:12.129703  271328 cli_runner.go:162] docker network inspect auto-20210813204009-13784 returned with exit code 1
	I0813 20:51:12.129740  271328 network_create.go:258] error running [docker network inspect auto-20210813204009-13784]: docker network inspect auto-20210813204009-13784: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210813204009-13784
	I0813 20:51:12.129756  271328 network_create.go:260] output of [docker network inspect auto-20210813204009-13784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210813204009-13784
	
	** /stderr **
	I0813 20:51:12.129811  271328 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:12.181560  271328 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-e58530d1cbfd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c3:d4:16:b0}}
	I0813 20:51:12.182554  271328 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0003f6078] misses:0}
	I0813 20:51:12.182616  271328 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:51:12.182634  271328 network_create.go:106] attempt to create docker network auto-20210813204009-13784 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 20:51:12.182698  271328 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210813204009-13784
	I0813 20:51:12.265555  271328 network_create.go:90] docker network auto-20210813204009-13784 192.168.58.0/24 created
	I0813 20:51:12.265592  271328 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20210813204009-13784" container
	I0813 20:51:12.265659  271328 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 20:51:12.325195  271328 cli_runner.go:115] Run: docker volume create auto-20210813204009-13784 --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true
	I0813 20:51:12.375214  271328 oci.go:102] Successfully created a docker volume auto-20210813204009-13784
	I0813 20:51:12.375313  271328 cli_runner.go:115] Run: docker run --rm --name auto-20210813204009-13784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --entrypoint /usr/bin/test -v auto-20210813204009-13784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0813 20:51:13.255475  271328 oci.go:106] Successfully prepared a docker volume auto-20210813204009-13784
	W0813 20:51:13.255535  271328 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 20:51:13.255544  271328 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 20:51:13.255605  271328 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 20:51:13.255907  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:13.255936  271328 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 20:51:13.256015  271328 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 20:51:13.443619  271328 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210813204009-13784 --name auto-20210813204009-13784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210813204009-13784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210813204009-13784 --network auto-20210813204009-13784 --ip 192.168.58.2 --volume auto-20210813204009-13784:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0813 20:51:14.118301  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Running}}
	I0813 20:51:14.185140  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:14.236626  271328 cli_runner.go:115] Run: docker exec auto-20210813204009-13784 stat /var/lib/dpkg/alternatives/iptables
	I0813 20:51:14.394377  271328 oci.go:278] the created container "auto-20210813204009-13784" has a running status.
	I0813 20:51:14.394412  271328 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa...
	I0813 20:51:14.559698  271328 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 20:51:14.962022  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:15.017995  271328 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 20:51:15.018017  271328 kic_runner.go:115] Args: [docker exec --privileged auto-20210813204009-13784 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 20:51:16.192846  233224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 20:51:16.192890  233224 addons.go:344] enableAddons completed in 2.994475177s
	I0813 20:51:17.804083  233224 pod_ready.go:102] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:17.801657  271328 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210813204009-13784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.545595555s)
	I0813 20:51:17.801693  271328 kic.go:188] duration metric: took 4.545754 seconds to extract preloaded images to volume
	I0813 20:51:17.801770  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:17.842060  271328 machine.go:88] provisioning docker machine ...
	I0813 20:51:17.842103  271328 ubuntu.go:169] provisioning hostname "auto-20210813204009-13784"
	I0813 20:51:17.842167  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:17.880732  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:17.880934  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:17.880952  271328 main.go:130] libmachine: About to run SSH command:
	sudo hostname auto-20210813204009-13784 && echo "auto-20210813204009-13784" | sudo tee /etc/hostname
	I0813 20:51:18.049279  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: auto-20210813204009-13784
	
	I0813 20:51:18.049355  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.089070  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.089215  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.089233  271328 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210813204009-13784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210813204009-13784/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210813204009-13784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:51:18.214361  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:51:18.214400  271328 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:51:18.214423  271328 ubuntu.go:177] setting up certificates
	I0813 20:51:18.214435  271328 provision.go:83] configureAuth start
	I0813 20:51:18.214499  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:18.257160  271328 provision.go:138] copyHostCerts
	I0813 20:51:18.257225  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:51:18.257232  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:51:18.257274  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1679 bytes)
	I0813 20:51:18.257345  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:51:18.257355  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:51:18.257373  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:51:18.257422  271328 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:51:18.257430  271328 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:51:18.257445  271328 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:51:18.257520  271328 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.auto-20210813204009-13784 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210813204009-13784]
	I0813 20:51:18.405685  271328 provision.go:172] copyRemoteCerts
	I0813 20:51:18.405745  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:51:18.405785  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.445891  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:18.536412  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:51:18.553289  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0813 20:51:18.568793  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:51:18.583774  271328 provision.go:86] duration metric: configureAuth took 369.326679ms
	I0813 20:51:18.583798  271328 ubuntu.go:193] setting minikube options for container-runtime
	I0813 20:51:18.583946  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:18.584072  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:18.627524  271328 main.go:130] libmachine: Using SSH client type: native
	I0813 20:51:18.627677  271328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 32975 <nil> <nil>}
	I0813 20:51:18.627697  271328 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 20:51:19.012135  271328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 20:51:19.012167  271328 machine.go:91] provisioned docker machine in 1.170081385s
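The CRIO_MINIKUBE_OPTIONS drop-in written above marks 10.96.0.0/12 (the same service CIDR that appears later in the kubeadm options) as an insecure registry range, then restarts CRI-O. A rough stand-alone sketch of that step, assuming root and systemd on the target (not minikube's real implementation):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Drop-in matching the log: trust the in-cluster service CIDR as an
	// insecure registry range, then restart CRI-O to pick it up.
	opts := []byte("CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n")
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", opts, 0o644); err != nil {
		panic(err)
	}
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		panic(err)
	}
}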
	I0813 20:51:19.012178  271328 client.go:171] LocalClient.Create took 6.979844019s
	I0813 20:51:19.012195  271328 start.go:168] duration metric: libmachine.API.Create for "auto-20210813204009-13784" took 6.979905282s
	I0813 20:51:19.012204  271328 start.go:267] post-start starting for "auto-20210813204009-13784" (driver="docker")
	I0813 20:51:19.012215  271328 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:51:19.012274  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:51:19.012321  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.051463  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.148765  271328 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:51:19.151322  271328 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 20:51:19.151341  271328 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 20:51:19.151349  271328 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 20:51:19.151355  271328 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 20:51:19.151364  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:51:19.151409  271328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:51:19.151507  271328 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem -> 137842.pem in /etc/ssl/certs
	I0813 20:51:19.151607  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:51:19.158200  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:19.176073  271328 start.go:270] post-start completed in 163.849198ms
	I0813 20:51:19.176519  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.224022  271328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/config.json ...
	I0813 20:51:19.224268  271328 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:51:19.224328  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.265461  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.357703  271328 start.go:129] duration metric: createHost completed in 7.327939716s
	I0813 20:51:19.357731  271328 start.go:80] releasing machines lock for "auto-20210813204009-13784", held for 7.328093299s
	I0813 20:51:19.357829  271328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210813204009-13784
	I0813 20:51:19.403591  271328 ssh_runner.go:149] Run: systemctl --version
	I0813 20:51:19.403631  271328 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:51:19.403663  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.403725  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:19.454924  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.455089  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:19.690299  271328 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 20:51:19.711263  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 20:51:19.720449  271328 docker.go:153] disabling docker service ...
	I0813 20:51:19.720510  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:51:19.729566  271328 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:51:19.738541  271328 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:51:19.809055  271328 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:51:19.878138  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:51:19.887210  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:51:19.901071  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 20:51:19.909825  271328 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 20:51:19.909855  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
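The two sed invocations above rewrite /etc/crio/crio.conf in place to pin the pause image and select the "kindnet" CNI network. An illustrative Go equivalent using regexp instead of sed (patchCrioConf is a hypothetical name):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf mirrors the two sed edits in the log: pin the pause image
// and point CRI-O's default CNI network at "kindnet".
func patchCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^pause_image = .*$`)
	cni := regexp.MustCompile(`(?m)^.*cni_default_network = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "k8s.gcr.io/pause:3.4.1"`)
	return cni.ReplaceAllString(conf, `cni_default_network = "kindnet"`)
}

func main() {
	fmt.Print(patchCrioConf("pause_image = \"k8s.gcr.io/pause:3.2\"\n# cni_default_network = \"\"\n"))
}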
	I0813 20:51:19.918547  271328 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:51:19.925341  271328 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:51:19.925401  271328 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:51:19.932883  271328 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:51:19.939083  271328 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:51:20.008572  271328 ssh_runner.go:149] Run: sudo systemctl start crio
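The status-255 sysctl error above is tolerated by design: when /proc/sys/net/bridge/bridge-nf-call-iptables is absent, the fallback is to load br_netfilter and enable IPv4 forwarding before starting CRI-O. A minimal sketch of that check-and-fallback, assuming root:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing (status 255 above), try to
	// load the br_netfilter module, then enable IPv4 forwarding.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Printf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}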
	I0813 20:51:20.019341  271328 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 20:51:20.019407  271328 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 20:51:20.022897  271328 start.go:413] Will wait 60s for crictl version
	I0813 20:51:20.022952  271328 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:51:20.049207  271328 start.go:422] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 20:51:20.049276  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.118062  271328 ssh_runner.go:149] Run: crio --version
	I0813 20:51:20.185186  271328 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 20:51:20.185268  271328 cli_runner.go:115] Run: docker network inspect auto-20210813204009-13784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 20:51:20.231193  271328 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 20:51:20.234527  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:51:20.243481  271328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:51:20.243537  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.298894  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.298920  271328 crio.go:333] Images already preloaded, skipping extraction
	I0813 20:51:20.298967  271328 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:51:20.326049  271328 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 20:51:20.326070  271328 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:51:20.326138  271328 ssh_runner.go:149] Run: crio config
	I0813 20:51:20.405222  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:20.405254  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:20.405269  271328 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:51:20.405286  271328 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210813204009-13784 NodeName:auto-20210813204009-13784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:51:20.405450  271328 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "auto-20210813204009-13784"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:51:20.406210  271328 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=auto-20210813204009-13784 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:51:20.406291  271328 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:51:20.414073  271328 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:51:20.414143  271328 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:51:20.420611  271328 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (556 bytes)
	I0813 20:51:20.432233  271328 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:51:20.443622  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2066 bytes)
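The kubeadm.yaml staged above is a single file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small stand-alone sketch that splits the staged file on the document separator and reports each kind, for anyone who wants to verify what was generated:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// One stream of four YAML documents; split on the separator and
	// report each document's kind.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}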
	I0813 20:51:20.454650  271328 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 20:51:20.457221  271328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:51:20.467941  271328 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784 for IP: 192.168.58.2
	I0813 20:51:20.467993  271328 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:51:20.468013  271328 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:51:20.468073  271328 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key
	I0813 20:51:20.468084  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt with IP's: []
	I0813 20:51:20.834054  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt ...
	I0813 20:51:20.834092  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: {Name:mk7fec601fb1fafe5c23646db0e11a54596e8bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834267  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key ...
	I0813 20:51:20.834281  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.key: {Name:mk1cae1776891d9f945556a388916d00049fb0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:20.834361  271328 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041
	I0813 20:51:20.834373  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:51:21.063423  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 ...
	I0813 20:51:21.063459  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041: {Name:mk251c4f0d507b09ef6d31c1707428420ec85197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065611  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 ...
	I0813 20:51:21.065633  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041: {Name:mk4d38dae507bc9d1c850061ba3bdb1c6e2ca7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.065723  271328 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt
	I0813 20:51:21.065806  271328 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key
	I0813 20:51:21.065871  271328 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key
	I0813 20:51:21.065883  271328 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt with IP's: []
	I0813 20:51:21.152453  271328 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt ...
	I0813 20:51:21.152481  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt: {Name:mke5a626b5b050e50bb47e400c3bba4f5fb88778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152637  271328 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key ...
	I0813 20:51:21.152650  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key: {Name:mkb2a71eb086a15771297e8ab11e852569412fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:21.152807  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem (1338 bytes)
	W0813 20:51:21.152843  271328 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784_empty.pem, impossibly tiny 0 bytes
	I0813 20:51:21.152855  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:51:21.152880  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:51:21.152909  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:51:21.152931  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1679 bytes)
	I0813 20:51:21.152971  271328 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem (1708 bytes)
	I0813 20:51:21.153904  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:51:21.171484  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 20:51:21.187960  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:51:21.205911  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:51:21.223614  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:51:21.239905  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:51:21.255368  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:51:21.271028  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 20:51:21.286769  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/137842.pem --> /usr/share/ca-certificates/137842.pem (1708 bytes)
	I0813 20:51:21.302428  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:51:21.317590  271328 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/13784.pem --> /usr/share/ca-certificates/13784.pem (1338 bytes)
	I0813 20:51:21.336580  271328 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:51:21.355880  271328 ssh_runner.go:149] Run: openssl version
	I0813 20:51:21.361210  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:51:21.368318  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371245  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.371283  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:51:21.376426  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:51:21.384634  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13784.pem && ln -fs /usr/share/ca-certificates/13784.pem /etc/ssl/certs/13784.pem"
	I0813 20:51:21.392048  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395072  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:19 /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.395113  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13784.pem
	I0813 20:51:21.400410  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13784.pem /etc/ssl/certs/51391683.0"
	I0813 20:51:21.408727  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137842.pem && ln -fs /usr/share/ca-certificates/137842.pem /etc/ssl/certs/137842.pem"
	I0813 20:51:21.415718  271328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418881  271328 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:19 /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.418923  271328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137842.pem
	I0813 20:51:21.423802  271328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137842.pem /etc/ssl/certs/3ec20f2e.0"
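The openssl/ln pairs above implement standard OpenSSL CA-path trust: compute each certificate's subject hash and symlink <hash>.0 under /etc/ssl/certs to the PEM file. A sketch of one such step (trustCert is a hypothetical helper; assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert computes the certificate's subject hash via openssl (as in the
// log) and links /etc/ssl/certs/<hash>.0 to it so OpenSSL consumers find it.
func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ok if the link does not exist yet
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
}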
	I0813 20:51:21.431770  271328 kubeadm.go:390] StartCluster: {Name:auto-20210813204009-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813204009-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:51:21.431861  271328 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 20:51:21.431914  271328 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:21.455876  271328 cri.go:76] found id: ""
	I0813 20:51:21.455927  271328 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:51:21.463196  271328 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:21.471334  271328 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:21.471384  271328 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:21.478565  271328 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:21.478610  271328 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:18.862764  233224 pod_ready.go:92] pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.862797  233224 pod_ready.go:81] duration metric: took 5.574582513s waiting for pod "coredns-78fcd69978-jrfnc" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.862817  233224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867642  233224 pod_ready.go:92] pod "coredns-78fcd69978-kbf57" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:18.867658  233224 pod_ready.go:81] duration metric: took 4.833167ms waiting for pod "coredns-78fcd69978-kbf57" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:18.867668  233224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:20.879817  233224 pod_ready.go:102] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"False"
	I0813 20:51:21.378531  233224 pod_ready.go:92] pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.378554  233224 pod_ready.go:81] duration metric: took 2.510878118s waiting for pod "etcd-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.378572  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382866  233224 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.382882  233224 pod_ready.go:81] duration metric: took 4.296091ms waiting for pod "kube-apiserver-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.382892  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386782  233224 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.386801  233224 pod_ready.go:81] duration metric: took 3.90189ms waiting for pod "kube-controller-manager-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.386813  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390480  233224 pod_ready.go:92] pod "kube-proxy-vf22v" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.390494  233224 pod_ready.go:81] duration metric: took 3.672888ms waiting for pod "kube-proxy-vf22v" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.390501  233224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604404  233224 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:51:21.604433  233224 pod_ready.go:81] duration metric: took 213.923321ms waiting for pod "kube-scheduler-no-preload-20210813204216-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:21.604445  233224 pod_ready.go:38] duration metric: took 8.327391702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:21.604469  233224 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:51:21.604523  233224 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:51:21.685434  233224 api_server.go:70] duration metric: took 8.487094951s to wait for apiserver process to appear ...
	I0813 20:51:21.685459  233224 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:51:21.685471  233224 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:51:21.691084  233224 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:51:21.691907  233224 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 20:51:21.691929  233224 api_server.go:129] duration metric: took 6.463677ms to wait for apiserver health ...
	I0813 20:51:21.691939  233224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:51:21.806833  233224 system_pods.go:59] 10 kube-system pods found
	I0813 20:51:21.806865  233224 system_pods.go:61] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:21.806872  233224 system_pods.go:61] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:21.806878  233224 system_pods.go:61] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:21.806884  233224 system_pods.go:61] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:21.806890  233224 system_pods.go:61] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:21.806897  233224 system_pods.go:61] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:21.806903  233224 system_pods.go:61] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:21.806909  233224 system_pods.go:61] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:21.806921  233224 system_pods.go:61] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:21.806947  233224 system_pods.go:61] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:21.806955  233224 system_pods.go:74] duration metric: took 115.009603ms to wait for pod list to return data ...
	I0813 20:51:21.806968  233224 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:51:22.003355  233224 default_sa.go:45] found service account: "default"
	I0813 20:51:22.003384  233224 default_sa.go:55] duration metric: took 196.403211ms for default service account to be created ...
	I0813 20:51:22.003397  233224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:51:22.206326  233224 system_pods.go:86] 10 kube-system pods found
	I0813 20:51:22.206359  233224 system_pods.go:89] "coredns-78fcd69978-jrfnc" [ceeda09d-5a80-436b-9861-2128ef376588] Running
	I0813 20:51:22.206368  233224 system_pods.go:89] "coredns-78fcd69978-kbf57" [fe861d02-23aa-4feb-a9f7-53652d9f9906] Running
	I0813 20:51:22.206376  233224 system_pods.go:89] "etcd-no-preload-20210813204216-13784" [51a3ae08-48e3-4805-b08b-eb70b524351f] Running
	I0813 20:51:22.206382  233224 system_pods.go:89] "kindnet-tm5jp" [bedfd62d-a6db-4098-9890-d2fccbc634e4] Running
	I0813 20:51:22.206390  233224 system_pods.go:89] "kube-apiserver-no-preload-20210813204216-13784" [59aabf29-8dce-4dda-bef6-d3c7da1e7df4] Running
	I0813 20:51:22.206398  233224 system_pods.go:89] "kube-controller-manager-no-preload-20210813204216-13784" [280ea412-18bd-43ae-bb17-d91becdd137a] Running
	I0813 20:51:22.206407  233224 system_pods.go:89] "kube-proxy-vf22v" [2fd53b6e-42f1-4aa7-8509-d97e750fbf72] Running
	I0813 20:51:22.206414  233224 system_pods.go:89] "kube-scheduler-no-preload-20210813204216-13784" [d7a21281-b273-404d-8dae-e144005780e3] Running
	I0813 20:51:22.206428  233224 system_pods.go:89] "metrics-server-7c784ccb57-trj2k" [8f30b352-ee9a-4412-a279-83a5caa024bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:51:22.206438  233224 system_pods.go:89] "storage-provisioner" [c60688b2-6884-491b-b31c-f749638f55d3] Running
	I0813 20:51:22.206451  233224 system_pods.go:126] duration metric: took 203.046705ms to wait for k8s-apps to be running ...
	I0813 20:51:22.206463  233224 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:51:22.206511  233224 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:22.263444  233224 system_svc.go:56] duration metric: took 56.96766ms WaitForService to wait for kubelet.
	I0813 20:51:22.263482  233224 kubeadm.go:547] duration metric: took 9.065148102s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:51:22.263519  233224 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:51:22.403039  233224 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:51:22.403065  233224 node_conditions.go:123] node cpu capacity is 8
	I0813 20:51:22.403081  233224 node_conditions.go:105] duration metric: took 139.554694ms to run NodePressure ...
	I0813 20:51:22.403096  233224 start.go:231] waiting for startup goroutines ...
	I0813 20:51:22.450275  233224 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 20:51:22.455408  233224 out.go:177] 
	W0813 20:51:22.455568  233224 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 20:51:22.462541  233224 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 20:51:22.464230  233224 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813204216-13784" cluster and "default" namespace by default
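The warning above comes from a version-skew check: kubectl is supported within one minor version of the cluster, and a 1.20 client against a 1.22 cluster is a skew of two. A toy sketch of that comparison (parsing simplified; not minikube's actual check):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor number from a version like "1.22.0-rc.0".
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.20.5", "1.22.0-rc.0"
	if skew := minor(cluster) - minor(client); skew > 1 || skew < -1 {
		fmt.Printf("! kubectl %s may be incompatible with Kubernetes %s (minor skew: %d)\n",
			client, cluster, skew)
	}
}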
	I0813 20:51:21.794120  271328 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:25.163675  271328 out.go:204]   - Booting up control plane ...
	I0813 20:51:28.722579  240241 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.346300289s)
	I0813 20:51:28.722667  240241 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 20:51:28.732254  240241 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 20:51:28.732318  240241 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:51:28.757337  240241 cri.go:76] found id: ""
	I0813 20:51:28.757392  240241 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:51:28.764551  240241 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 20:51:28.764599  240241 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:51:28.771196  240241 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:51:28.771247  240241 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 20:51:29.067432  240241 out.go:204]   - Generating certificates and keys ...
	I0813 20:51:29.947085  240241 out.go:204]   - Booting up control plane ...
	I0813 20:51:40.720555  271328 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:41.136233  271328 cni.go:93] Creating CNI manager for ""
	I0813 20:51:41.136257  271328 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:41.138470  271328 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:41.138531  271328 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:41.142093  271328 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:41.142114  271328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:41.159919  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:51:43.999786  240241 out.go:204]   - Configuring RBAC rules ...
	I0813 20:51:44.412673  240241 cni.go:93] Creating CNI manager for ""
	I0813 20:51:44.412698  240241 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:51:44.414497  240241 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 20:51:44.414556  240241 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 20:51:44.418236  240241 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 20:51:44.418253  240241 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 20:51:44.430863  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 20:51:41.568473  271328 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:41.568595  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:41.568620  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=auto-20210813204009-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:41.684391  271328 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:41.684482  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:42.252918  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:42.753184  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:43.253340  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:43.752498  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.252543  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.752811  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.253371  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.753399  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.252813  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.663289  240241 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:51:44.663354  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813204407-13784 minikube.k8s.io/updated_at=2021_08_13T20_51_44_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.663359  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:44.785476  240241 ops.go:34] apiserver oom_adj: -16
	I0813 20:51:44.785625  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.361034  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:45.860496  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.360813  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.861457  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.360900  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:47.860847  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.361284  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.860717  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.361233  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:46.753324  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:48.622147  271328 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.868786003s)
	I0813 20:51:48.753354  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:49.860593  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.861309  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.361330  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.860839  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.360530  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:52.261881  271328 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.5084884s)
	I0813 20:51:52.752569  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.253464  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:53.753088  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.252748  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.752605  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.253338  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.752990  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.253395  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:54.860519  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.360704  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:55.861401  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.360874  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.861184  240241 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:56.935142  240241 kubeadm.go:985] duration metric: took 12.271847359s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:56.935173  240241 kubeadm.go:392] StartCluster complete in 5m59.56574911s
	I0813 20:51:56.935192  240241 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:56.935280  240241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:56.936618  240241 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.471369  240241 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813204407-13784" rescaled to 1
	I0813 20:51:57.471434  240241 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:57.473147  240241 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:57.473200  240241 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:57.471473  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:57.471495  240241 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 20:51:57.473309  240241 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473332  240241 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473329  240241 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.473341  240241 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:57.473359  240241 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473373  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.471677  240241 config.go:177] Loaded profile config "default-k8s-different-port-20210813204407-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:57.473389  240241 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473397  240241 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473415  240241 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:57.473418  240241 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.473375  240241 addons.go:147] addon dashboard should already be in state true
	W0813 20:51:57.473430  240241 addons.go:147] addon metrics-server should already be in state true
	I0813 20:51:57.473453  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.473469  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.473755  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473923  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473970  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.473984  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.500075  240241 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813204407-13784" to be "Ready" ...
	I0813 20:51:57.508390  240241 node_ready.go:49] node "default-k8s-different-port-20210813204407-13784" has status "Ready":"True"
	I0813 20:51:57.508412  240241 node_ready.go:38] duration metric: took 8.303909ms waiting for node "default-k8s-different-port-20210813204407-13784" to be "Ready" ...
	I0813 20:51:57.508425  240241 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:57.530074  240241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:57.559993  240241 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:57.561443  240241 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 20:51:56.753159  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.252816  271328 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:51:57.323178  271328 kubeadm.go:985] duration metric: took 15.754657804s to wait for elevateKubeSystemPrivileges.
	I0813 20:51:57.323205  271328 kubeadm.go:392] StartCluster complete in 35.891441868s
	I0813 20:51:57.323233  271328 settings.go:142] acquiring lock: {Name:mk2a7ffb12ba5b287af223c99b3f78a4e9868883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.323334  271328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:51:57.325280  271328 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk996f07f55c1915aca44637a5f4821f34970d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:51:57.844496  271328 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20210813204009-13784" rescaled to 1
	I0813 20:51:57.844542  271328 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:51:57.847125  271328 out.go:177] * Verifying Kubernetes components...
	I0813 20:51:57.847179  271328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:51:57.844600  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:51:57.844628  271328 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:51:57.844773  271328 config.go:177] Loaded profile config "auto-20210813204009-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:51:57.847273  271328 addons.go:59] Setting storage-provisioner=true in profile "auto-20210813204009-13784"
	I0813 20:51:57.847289  271328 addons.go:135] Setting addon storage-provisioner=true in "auto-20210813204009-13784"
	W0813 20:51:57.847298  271328 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:51:57.847304  271328 addons.go:59] Setting default-storageclass=true in profile "auto-20210813204009-13784"
	I0813 20:51:57.847325  271328 host.go:66] Checking if "auto-20210813204009-13784" exists ...
	I0813 20:51:57.847330  271328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20210813204009-13784"
	I0813 20:51:57.847657  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.847848  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.914584  271328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:51:57.914695  271328 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.914708  271328 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:57.914767  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:57.926636  271328 addons.go:135] Setting addon default-storageclass=true in "auto-20210813204009-13784"
	W0813 20:51:57.926670  271328 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:57.926704  271328 host.go:66] Checking if "auto-20210813204009-13784" exists ...
	I0813 20:51:57.927086  271328 cli_runner.go:115] Run: docker container inspect auto-20210813204009-13784 --format={{.State.Status}}
	I0813 20:51:57.944440  271328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:57.946970  271328 node_ready.go:35] waiting up to 5m0s for node "auto-20210813204009-13784" to be "Ready" ...
	I0813 20:51:57.951330  271328 node_ready.go:49] node "auto-20210813204009-13784" has status "Ready":"True"
	I0813 20:51:57.951353  271328 node_ready.go:38] duration metric: took 4.355543ms waiting for node "auto-20210813204009-13784" to be "Ready" ...
	I0813 20:51:57.951367  271328 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:51:57.964918  271328 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:57.974587  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:57.995812  271328 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:57.995845  271328 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:57.995903  271328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210813204009-13784
	I0813 20:51:58.104226  271328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32975 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813204009-13784/id_rsa Username:docker}
	I0813 20:51:58.127261  271328 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:58.207306  271328 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:58.318052  271328 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0813 20:51:57.560121  240241 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.562962  240241 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 20:51:57.563043  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 20:51:57.563058  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 20:51:57.563087  240241 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 20:51:57.563122  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.563145  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:51:57.563156  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:51:57.563204  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.563285  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:51:57.563317  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.585350  240241 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813204407-13784"
	W0813 20:51:57.585389  240241 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:51:57.585423  240241 host.go:66] Checking if "default-k8s-different-port-20210813204407-13784" exists ...
	I0813 20:51:57.586491  240241 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210813204407-13784 --format={{.State.Status}}
	I0813 20:51:57.640285  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.643118  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.651320  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.655597  240241 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:57.655617  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:51:57.655661  240241 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210813204407-13784
	I0813 20:51:57.659397  240241 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:51:57.708263  240241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813204407-13784/id_rsa Username:docker}
	I0813 20:51:57.772822  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 20:51:57.772851  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 20:51:57.775665  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:51:57.775686  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 20:51:57.778938  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:51:57.866896  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:51:57.866921  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:51:57.875909  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 20:51:57.875935  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 20:51:57.895465  240241 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:57.895493  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:51:57.906579  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 20:51:57.906602  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 20:51:57.958953  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:51:57.977795  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 20:51:57.977819  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 20:51:57.988125  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:51:58.065141  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 20:51:58.065163  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 20:51:58.173899  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 20:51:58.173923  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 20:51:58.280880  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 20:51:58.280914  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 20:51:58.289511  240241 start.go:728] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0813 20:51:58.375994  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 20:51:58.376079  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 20:51:58.488006  240241 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:58.488037  240241 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 20:51:58.562447  240241 pod_ready.go:97] error getting pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-94bmz" not found
	I0813 20:51:58.562481  240241 pod_ready.go:81] duration metric: took 1.032368127s waiting for pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace to be "Ready" ...
	E0813 20:51:58.562494  240241 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-94bmz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-94bmz" not found
	I0813 20:51:58.562502  240241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace to be "Ready" ...
	I0813 20:51:58.578755  240241 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 20:51:59.569598  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.79061998s)
	I0813 20:51:59.658034  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69903678s)
	I0813 20:51:59.658141  240241 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210813204407-13784"
	I0813 20:51:59.658099  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.669942348s)
	I0813 20:52:00.558702  240241 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.979881854s)
	I0813 20:51:58.812728  271328 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:51:58.812772  271328 addons.go:344] enableAddons completed in 968.157461ms
	I0813 20:51:59.995308  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:00.560716  240241 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0813 20:52:00.560785  240241 addons.go:344] enableAddons completed in 3.089294462s
	I0813 20:52:00.667954  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:03.098119  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:02.492544  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:04.992816  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:05.098856  240241 pod_ready.go:102] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:07.099285  240241 pod_ready.go:92] pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.099314  240241 pod_ready.go:81] duration metric: took 8.536802711s waiting for pod "coredns-558bd4d5db-gr5g8" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.099327  240241 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.103649  240241 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.103672  240241 pod_ready.go:81] duration metric: took 4.335636ms waiting for pod "etcd-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.103690  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.107793  240241 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.107812  240241 pod_ready.go:81] duration metric: took 4.11268ms waiting for pod "kube-apiserver-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.107827  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.114439  240241 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.114457  240241 pod_ready.go:81] duration metric: took 6.620724ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.114469  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5hsp" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.118338  240241 pod_ready.go:92] pod "kube-proxy-f5hsp" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.118352  240241 pod_ready.go:81] duration metric: took 3.876581ms waiting for pod "kube-proxy-f5hsp" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.118361  240241 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.496572  240241 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace has status "Ready":"True"
	I0813 20:52:07.496591  240241 pod_ready.go:81] duration metric: took 378.224297ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813204407-13784" in "kube-system" namespace to be "Ready" ...
	I0813 20:52:07.496599  240241 pod_ready.go:38] duration metric: took 9.98816095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:52:07.496618  240241 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:52:07.496655  240241 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:52:07.520058  240241 api_server.go:70] duration metric: took 10.048585682s to wait for apiserver process to appear ...
	I0813 20:52:07.520082  240241 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:52:07.520092  240241 api_server.go:239] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0813 20:52:07.524876  240241 api_server.go:265] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0813 20:52:07.525872  240241 api_server.go:139] control plane version: v1.21.3
	I0813 20:52:07.525891  240241 api_server.go:129] duration metric: took 5.802306ms to wait for apiserver health ...
	I0813 20:52:07.525914  240241 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:52:07.699622  240241 system_pods.go:59] 9 kube-system pods found
	I0813 20:52:07.699655  240241 system_pods.go:61] "coredns-558bd4d5db-gr5g8" [de1c554f-b8e1-49db-9f29-54dd16296938] Running
	I0813 20:52:07.699660  240241 system_pods.go:61] "etcd-default-k8s-different-port-20210813204407-13784" [fa938417-d881-4a17-bc90-cea2f1e1eb92] Running
	I0813 20:52:07.699664  240241 system_pods.go:61] "kindnet-xg9rd" [b422811b-55ea-4928-823d-0cf4e7b32f3e] Running
	I0813 20:52:07.699669  240241 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813204407-13784" [4b14160e-22dd-4c1f-b1a6-8a15f3d33d36] Running
	I0813 20:52:07.699673  240241 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813204407-13784" [bbac6064-38a3-4fc1-a3fe-61fc1820b250] Running
	I0813 20:52:07.699677  240241 system_pods.go:61] "kube-proxy-f5hsp" [1aed7397-275e-474d-9287-2624feb99c42] Running
	I0813 20:52:07.699681  240241 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813204407-13784" [0ce6078f-1102-4c32-883d-1a4f95d0f4cc] Running
	I0813 20:52:07.699689  240241 system_pods.go:61] "metrics-server-7c784ccb57-44694" [38bf7ccc-7705-4237-a220-b3b4e39f962d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:07.699694  240241 system_pods.go:61] "storage-provisioner" [d9ea2a9c-7cd6-4367-8335-5bdc8cfbcba1] Running
	I0813 20:52:07.699700  240241 system_pods.go:74] duration metric: took 173.777118ms to wait for pod list to return data ...
	I0813 20:52:07.699714  240241 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:52:07.897248  240241 default_sa.go:45] found service account: "default"
	I0813 20:52:07.897273  240241 default_sa.go:55] duration metric: took 197.547768ms for default service account to be created ...
	I0813 20:52:07.897282  240241 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:52:08.100655  240241 system_pods.go:86] 9 kube-system pods found
	I0813 20:52:08.100687  240241 system_pods.go:89] "coredns-558bd4d5db-gr5g8" [de1c554f-b8e1-49db-9f29-54dd16296938] Running
	I0813 20:52:08.100696  240241 system_pods.go:89] "etcd-default-k8s-different-port-20210813204407-13784" [fa938417-d881-4a17-bc90-cea2f1e1eb92] Running
	I0813 20:52:08.100705  240241 system_pods.go:89] "kindnet-xg9rd" [b422811b-55ea-4928-823d-0cf4e7b32f3e] Running
	I0813 20:52:08.100712  240241 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210813204407-13784" [4b14160e-22dd-4c1f-b1a6-8a15f3d33d36] Running
	I0813 20:52:08.100721  240241 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210813204407-13784" [bbac6064-38a3-4fc1-a3fe-61fc1820b250] Running
	I0813 20:52:08.100727  240241 system_pods.go:89] "kube-proxy-f5hsp" [1aed7397-275e-474d-9287-2624feb99c42] Running
	I0813 20:52:08.100734  240241 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210813204407-13784" [0ce6078f-1102-4c32-883d-1a4f95d0f4cc] Running
	I0813 20:52:08.100746  240241 system_pods.go:89] "metrics-server-7c784ccb57-44694" [38bf7ccc-7705-4237-a220-b3b4e39f962d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:52:08.100756  240241 system_pods.go:89] "storage-provisioner" [d9ea2a9c-7cd6-4367-8335-5bdc8cfbcba1] Running
	I0813 20:52:08.100771  240241 system_pods.go:126] duration metric: took 203.483249ms to wait for k8s-apps to be running ...
	I0813 20:52:08.100783  240241 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:52:08.100832  240241 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:52:08.111772  240241 system_svc.go:56] duration metric: took 10.982724ms WaitForService to wait for kubelet.
	I0813 20:52:08.111793  240241 kubeadm.go:547] duration metric: took 10.64032656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:52:08.111828  240241 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:52:08.297054  240241 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 20:52:08.297080  240241 node_conditions.go:123] node cpu capacity is 8
	I0813 20:52:08.297097  240241 node_conditions.go:105] duration metric: took 185.262995ms to run NodePressure ...
	I0813 20:52:08.297110  240241 start.go:231] waiting for startup goroutines ...
	I0813 20:52:08.342344  240241 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:52:08.344774  240241 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210813204407-13784" cluster and "default" namespace by default
	I0813 20:52:06.993296  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:09.493158  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:11.992153  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:14.493122  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:16.992483  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:18.992950  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	I0813 20:52:20.993464  271328 pod_ready.go:102] pod "coredns-558bd4d5db-7htbx" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 20:45:51 UTC, end at Fri 2021-08-13 20:52:23 UTC. --
	Aug 13 20:52:01 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:01.676533236Z" level=info msg="Image k8s.gcr.io/echoserver:1.4 not found" id=a9e2a656-ac7d-4c58-be93-de67259da5f2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:01 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:01.676974389Z" level=info msg="Pulling image: k8s.gcr.io/echoserver:1.4" id=787ff5b3-c9ad-4b52-a884-2c8042bf6602 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:52:01 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:01.679348629Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:02 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:02.360254108Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 13 20:52:07 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:07.932772424Z" level=info msg="Pulled image: k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=787ff5b3-c9ad-4b52-a884-2c8042bf6602 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:52:07 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:07.933617008Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=98c8181d-84cd-4e83-861a-eeedb965b283 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:07 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:07.934851799Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=98c8181d-84cd-4e83-861a-eeedb965b283 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:07 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:07.935641607Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=6ddef9de-c029-4043-aef8-9240ed5afc22 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.113994750Z" level=info msg="Created container 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=6ddef9de-c029-4043-aef8-9240ed5afc22 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.114517395Z" level=info msg="Starting container: 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1" id=264fb6b6-2a13-4c9b-8df5-31fa4ad16144 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.137593469Z" level=info msg="Started container 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=264fb6b6-2a13-4c9b-8df5-31fa4ad16144 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.913338650Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=135d9bec-6af4-46ad-8df5-d7731eaefd2b name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.915076518Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=135d9bec-6af4-46ad-8df5-d7731eaefd2b name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.915651647Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=c7e4140e-11d8-49eb-a3ac-bb539a9c712d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.917465825Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c7e4140e-11d8-49eb-a3ac-bb539a9c712d name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:08.918287149Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=168b57a0-f06a-4968-867a-73364ddf3d70 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.074016106Z" level=info msg="Created container bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=168b57a0-f06a-4968-867a-73364ddf3d70 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.074541973Z" level=info msg="Starting container: bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c" id=fc86ddee-4924-4486-9da6-54841721437e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.098751233Z" level=info msg="Started container bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=fc86ddee-4924-4486-9da6-54841721437e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.917532097Z" level=info msg="Removing container: 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1" id=ff791e4e-85b3-4452-90c6-3f11b8d9bd80 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:09.962333142Z" level=info msg="Removed container 59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949/dashboard-metrics-scraper" id=ff791e4e-85b3-4452-90c6-3f11b8d9bd80 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:14.771551032Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=018bdd6d-25bf-43f9-b48b-dc17c7592f46 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:14.771828072Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=018bdd6d-25bf-43f9-b48b-dc17c7592f46 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:14.772369723Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=83456264-8b9b-4324-b57b-4178055387fd name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 crio[244]: time="2021-08-13 20:52:14.783344441Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	bc763b4bcd6bb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   14 seconds ago      Exited              dashboard-metrics-scraper   1                   dccefa6f9ba7f
	eb7f966beddca       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   22 seconds ago      Running             kubernetes-dashboard        0                   664ec32a83d0f
	0643fb501141e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago      Running             storage-provisioner         0                   2ee6c006120c4
	7bc03b2fb5b97       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   23 seconds ago      Running             coredns                     0                   e036676f80751
	ea2cef428a328       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   24 seconds ago      Running             kube-proxy                  0                   33c4a99332d82
	7cac311527849       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   24 seconds ago      Running             kindnet-cni                 0                   d4e9467385adb
	723cb987e243a       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   46 seconds ago      Running             kube-scheduler              0                   fad404fda12f9
	b889a4cfb9f98       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   46 seconds ago      Running             etcd                        0                   2b6195e25b03e
	a82cc1cca9cca       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   46 seconds ago      Running             kube-apiserver              0                   c49e0bc62636d
	18e273df7ef80       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   46 seconds ago      Running             kube-controller-manager     0                   0c249ea3f831c
	
	* 
	* ==> coredns [7bc03b2fb5b97ade5a5ced7d7239284d3648b6e56d26bae93cf0afb2762b8dd9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 821b10ea3c4cce3a8581cf6a394d92f0
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210813204407-13784
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20210813204407-13784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=default-k8s-different-port-20210813204407-13784
	                    minikube.k8s.io/updated_at=2021_08_13T20_51_44_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:51:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20210813204407-13784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:52:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:51:52 +0000   Fri, 13 Aug 2021 20:51:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:51:52 +0000   Fri, 13 Aug 2021 20:51:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:51:52 +0000   Fri, 13 Aug 2021 20:51:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:51:52 +0000   Fri, 13 Aug 2021 20:51:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20210813204407-13784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                cd5fc00c-b697-4c7f-b544-919a1ee5577b
	  Boot ID:                    fcce678b-15f2-414e-89a0-8db427efa51a
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-gr5g8                                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-different-port-20210813204407-13784                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-xg9rd                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-different-port-20210813204407-13784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20210813204407-13784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-f5hsp                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-different-port-20210813204407-13784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 metrics-server-7c784ccb57-44694                                            100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         24s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-6h949                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-bdmvt                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 34s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet     Node default-k8s-different-port-20210813204407-13784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet     Node default-k8s-different-port-20210813204407-13784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet     Node default-k8s-different-port-20210813204407-13784 status is now: NodeHasSufficientPID
	  Normal  Starting                 23s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +3.895437] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[ +12.031205] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000003] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +1.787836] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000003] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[ +14.060065] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth132654c8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d2 33 13 cb 90 7c 08 06        .......3...|..
	[  +0.492422] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth0537654e
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 56 dc 40 69 33 08 06        .......V.@i3..
	[Aug13 20:52] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth42b216bb
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1a 75 7c 88 de fd 08 06        .......u|.....
	[  +0.348033] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth3a91f4fd
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1e a0 d8 e2 a6 b4 08 06        ..............
	[  +7.435044] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 2a ae c4 44 83 2c 08 06        ......*..D.,..
	[  +5.490524] IPv4: martian source 10.244.0.3 from 10.96.0.1, on dev br-41c8a2fb43b6
	[  +0.000025] ll header: 00000000: 02 42 7e ed 96 2b 02 42 c0 a8 43 02 08 00        .B~..+.B..C...
	[  +2.047860] IPv4: martian source 10.158.0.4 from 192.168.0.3, on dev br-5952937ba827
	[  +0.000002] ll header: 00000000: 02 42 93 c8 7f 3c 02 42 c0 a8 4c 02 08 00        .B...<.B..L...
	[  +4.034563] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd8e69602
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 93 4a 9f fb 2d 08 06        ........J..-..
	
	* 
	* ==> etcd [b889a4cfb9f98a0e1a75bc248fc206db415ef20ec427118e4f8f92c02d3ced22] <==
	* 2021-08-13 20:51:37.097787 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 20:51:37.097917 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 20:51:37.097967 I | embed: listening for peers on 192.168.67.2:2380
	raft2021/08/13 20:51:37 INFO: 8688e899f7831fc7 is starting a new election at term 1
	raft2021/08/13 20:51:37 INFO: 8688e899f7831fc7 became candidate at term 2
	raft2021/08/13 20:51:37 INFO: 8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2
	raft2021/08/13 20:51:37 INFO: 8688e899f7831fc7 became leader at term 2
	raft2021/08/13 20:51:37 INFO: raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2
	2021-08-13 20:51:37.581719 I | etcdserver: published {Name:default-k8s-different-port-20210813204407-13784 ClientURLs:[https://192.168.67.2:2379]} to cluster 9d8fdeb88b6def78
	2021-08-13 20:51:37.581797 I | embed: ready to serve client requests
	2021-08-13 20:51:37.581908 I | embed: ready to serve client requests
	2021-08-13 20:51:37.582224 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 20:51:37.582509 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 20:51:37.582587 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 20:51:37.583446 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:51:37.583519 I | embed: serving client requests on 192.168.67.2:2379
	2021-08-13 20:51:52.030472 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 20:51:52.236940 W | wal: sync duration of 1.661939646s, expected less than 1s
	2021-08-13 20:51:52.240180 W | etcdserver: request "header:<ID:2289934455866129373 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:1fc77b4148d3fbdc>" with result "size:42" took too long (1.665078515s) to execute
	2021-08-13 20:51:52.241536 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210813204407-13784\" " with result "range_response_count:1 size:6219" took too long (2.168556552s) to execute
	2021-08-13 20:51:52.611813 W | etcdserver: read-only range request "key:\"/registry/minions/default-k8s-different-port-20210813204407-13784\" " with result "range_response_count:1 size:6254" took too long (364.358353ms) to execute
	2021-08-13 20:51:52.611996 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (302.425875ms) to execute
	2021-08-13 20:52:01.031580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:09.373063 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:52:19.373111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:52:23 up  1:35,  0 users,  load average: 3.52, 2.75, 2.30
	Linux default-k8s-different-port-20210813204407-13784 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a82cc1cca9cca72dbbda29ae20add1371d75089232e162bebda4d7a93f4b7229] <==
	* Trace[1074450952]: ---"Object stored in database" 2172ms (20:51:00.244)
	Trace[1074450952]: [2.172308074s] [2.172308074s] END
	I0813 20:51:52.245449       1 trace.go:205] Trace[657919469]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:49.622) (total time: 2623ms):
	Trace[657919469]: ---"Object stored in database" 2622ms (20:51:00.245)
	Trace[657919469]: [2.623344488s] [2.623344488s] END
	I0813 20:51:52.245930       1 trace.go:205] Trace[1931834109]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-default-k8s-different-port-20210813204407-13784,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:50.072) (total time: 2173ms):
	Trace[1931834109]: ---"About to write a response" 2172ms (20:51:00.244)
	Trace[1931834109]: [2.173555292s] [2.173555292s] END
	I0813 20:51:52.246013       1 trace.go:205] Trace[2043869526]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:50.073) (total time: 2172ms):
	Trace[2043869526]: ---"Object stored in database" 2172ms (20:51:00.245)
	Trace[2043869526]: [2.172760156s] [2.172760156s] END
	I0813 20:51:52.253640       1 trace.go:205] Trace[1982690474]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:50.072) (total time: 2180ms):
	Trace[1982690474]: [2.180775869s] [2.180775869s] END
	I0813 20:51:52.612484       1 trace.go:205] Trace[1766309257]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:51:49.722) (total time: 2889ms):
	Trace[1766309257]: [2.889770193s] [2.889770193s] END
	I0813 20:51:56.936673       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:51:57.338694       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 20:52:02.175480       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 20:52:02.175545       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 20:52:02.175553       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 20:52:17.195451       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:52:17.195501       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:52:17.195512       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [18e273df7ef8068f15e66a27a5594ba380dcfee8d6092dc5991709cec2f3b326] <==
	* I0813 20:51:57.496245       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-94bmz"
	I0813 20:51:57.510664       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gr5g8"
	I0813 20:51:57.564602       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-94bmz"
	I0813 20:51:58.980949       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0813 20:51:59.079090       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 20:51:59.189272       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 20:51:59.288067       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-44694"
	I0813 20:51:59.896636       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 20:51:59.965841       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:59.975187       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:59.976021       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0813 20:51:59.984951       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:51:59.987350       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:51:59.987447       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:00.061090       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:00.067469       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:00.067797       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:00.067916       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:00.067974       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 20:52:00.073131       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 20:52:00.073140       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 20:52:00.073194       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:00.073218       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 20:52:00.083444       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-bdmvt"
	I0813 20:52:00.180237       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-6h949"
	
	* 
	* ==> kube-proxy [ea2cef428a3288c41b5f0fff2df8ec259ada78d0561b68c1c7b7641097b488dd] <==
	* I0813 20:51:59.887013       1 node.go:172] Successfully retrieved node IP: 192.168.67.2
	I0813 20:51:59.887063       1 server_others.go:140] Detected node IP 192.168.67.2
	W0813 20:51:59.887094       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 20:52:00.059369       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 20:52:00.059530       1 server_others.go:212] Using iptables Proxier.
	I0813 20:52:00.059593       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 20:52:00.059646       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 20:52:00.060072       1 server.go:643] Version: v1.21.3
	I0813 20:52:00.061180       1 config.go:224] Starting endpoint slice config controller
	I0813 20:52:00.061236       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 20:52:00.061399       1 config.go:315] Starting service config controller
	I0813 20:52:00.061443       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 20:52:00.068231       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:52:00.070286       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:52:00.162591       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:52:00.163107       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [723cb987e243addf3d24ee09648bd2c1d90f154b96b37e9dd97773224e44d0f9] <==
	* E0813 20:51:41.204699       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:51:41.205724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:51:41.205791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:51:41.205799       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:41.205892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:41.205923       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:51:41.206020       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:41.206034       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:51:41.206097       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:51:41.206135       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:51:41.206246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:51:41.206275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:41.206341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:51:41.206366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:51:42.035864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:51:42.073058       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:51:42.074788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:51:42.084746       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:51:42.150133       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:51:42.179402       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.359310       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.359327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:51:42.386800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:51:42.432634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0813 20:51:44.503811       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:45:51 UTC, end at Fri 2021-08-13 20:52:23 UTC. --
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:00.269156    5751 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2px5\" (UniqueName: \"kubernetes.io/projected/7fd5752b-835b-4cc8-9860-861195aef3d6-kube-api-access-z2px5\") pod \"kubernetes-dashboard-6fcdf4f6d-bdmvt\" (UID: \"7fd5752b-835b-4cc8-9860-861195aef3d6\") "
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:00.369995    5751 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/570c27ad-1f22-4a2b-b4b8-d09736125c6d-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-6h949\" (UID: \"570c27ad-1f22-4a2b-b4b8-d09736125c6d\") "
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:00.370074    5751 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59298\" (UniqueName: \"kubernetes.io/projected/570c27ad-1f22-4a2b-b4b8-d09736125c6d-kube-api-access-59298\") pod \"dashboard-metrics-scraper-8685c45546-6h949\" (UID: \"570c27ad-1f22-4a2b-b4b8-d09736125c6d\") "
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.780770    5751 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.780830    5751 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.781004    5751 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-688pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-44694_kube-system(38bf7ccc-7705-4237-a220-b3b4e39f962d): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.781069    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-44694" podUID=38bf7ccc-7705-4237-a220-b3b4e39f962d
	Aug 13 20:52:00 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:00.894368    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-44694" podUID=38bf7ccc-7705-4237-a220-b3b4e39f962d
	Aug 13 20:52:01 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:01.898896    5751 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 13 20:52:08 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:08.912935    5751 scope.go:111] "RemoveContainer" containerID="59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1"
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:09.916432    5751 scope.go:111] "RemoveContainer" containerID="59d4b05d29ae5f2aea56b8a69b43d6f413ccea7d0eed53324a50ce73ceebaac1"
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:09.916528    5751 scope.go:111] "RemoveContainer" containerID="bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c"
	Aug 13 20:52:09 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:09.916903    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-6h949_kubernetes-dashboard(570c27ad-1f22-4a2b-b4b8-d09736125c6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949" podUID=570c27ad-1f22-4a2b-b4b8-d09736125c6d
	Aug 13 20:52:10 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:10.294281    5751 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7/docker/be304b8d02d7af7ae9bd250c82d1942433fab21e2fe8c6746264de591ad494f7\": RecentStats: unable to find data in memory cache]"
	Aug 13 20:52:10 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:10.919130    5751 scope.go:111] "RemoveContainer" containerID="bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c"
	Aug 13 20:52:10 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:10.919403    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-6h949_kubernetes-dashboard(570c27ad-1f22-4a2b-b4b8-d09736125c6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949" podUID=570c27ad-1f22-4a2b-b4b8-d09736125c6d
	Aug 13 20:52:11 default-k8s-different-port-20210813204407-13784 kubelet[5751]: I0813 20:52:11.921399    5751 scope.go:111] "RemoveContainer" containerID="bc763b4bcd6bb8f25b0a9e5ad708030827a7787b218bc7ba89ffd2f8b6f2146c"
	Aug 13 20:52:11 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:11.921801    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-6h949_kubernetes-dashboard(570c27ad-1f22-4a2b-b4b8-d09736125c6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-6h949" podUID=570c27ad-1f22-4a2b-b4b8-d09736125c6d
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:14.787945    5751 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:14.787986    5751 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:14.788126    5751 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-688pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-44694_kube-system(38bf7ccc-7705-4237-a220-b3b4e39f962d): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Aug 13 20:52:14 default-k8s-different-port-20210813204407-13784 kubelet[5751]: E0813 20:52:14.788185    5751 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.67.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-44694" podUID=38bf7ccc-7705-4237-a220-b3b4e39f962d
	Aug 13 20:52:19 default-k8s-different-port-20210813204407-13784 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:52:19 default-k8s-different-port-20210813204407-13784 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:52:19 default-k8s-different-port-20210813204407-13784 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [eb7f966beddcab227c5a1f1a1ac25ac04ed015d0b9cfa75264ee8239b8c5db6a] <==
	* 2021/08/13 20:52:01 Starting overwatch
	2021/08/13 20:52:01 Using namespace: kubernetes-dashboard
	2021/08/13 20:52:01 Using in-cluster config to connect to apiserver
	2021/08/13 20:52:01 Using secret token for csrf signing
	2021/08/13 20:52:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 20:52:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 20:52:01 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 20:52:01 Generating JWE encryption key
	2021/08/13 20:52:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 20:52:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 20:52:01 Initializing JWE encryption key from synchronized object
	2021/08/13 20:52:01 Creating in-cluster Sidecar client
	2021/08/13 20:52:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 20:52:01 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [0643fb501141e4daad159a56c590d69eaffa9d182a31d9e47c66b7dcb2be547b] <==
	* I0813 20:52:01.090211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:52:01.102535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:52:01.102661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:52:01.111281       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:52:01.111425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813204407-13784_3564c416-fafb-451e-9107-9768ac6653a1!
	I0813 20:52:01.111422       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e62df17-df91-4aef-aba3-3fec816f6922", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210813204407-13784_3564c416-fafb-451e-9107-9768ac6653a1 became leader
	I0813 20:52:01.212604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813204407-13784_3564c416-fafb-451e-9107-9768ac6653a1!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784: exit status 2 (339.354788ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204407-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-44694
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204407-13784 describe pod metrics-server-7c784ccb57-44694
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210813204407-13784 describe pod metrics-server-7c784ccb57-44694: exit status 1 (68.313055ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-44694" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20210813204407-13784 describe pod metrics-server-7c784ccb57-44694: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (5.52s)
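
The post-mortem sequence above can be replayed by hand against the same profile; this is a minimal sketch assembled only from the commands already shown in this run (the pod-name placeholder is hypothetical and stands for whatever the field selector returns):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784
	kubectl --context default-k8s-different-port-20210813204407-13784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	kubectl --context default-k8s-different-port-20210813204407-13784 describe pod <non-running-pod-name>   # placeholder: substitute a name printed by the previous command

Note the race the log itself shows: the describe step fails with NotFound because the metrics-server pod listed a moment earlier had already been replaced by the time it was described.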

                                                
                                    

Test pass (226/264)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 10.94
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.06
10 TestDownloadOnly/v1.21.3/json-events 11.18
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.06
17 TestDownloadOnly/v1.22.0-rc.0/json-events 12.89
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.36
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
25 TestDownloadOnlyKic 30.12
26 TestOffline 83.36
29 TestAddons/parallel/Registry 23.77
31 TestAddons/parallel/MetricsServer 5.6
32 TestAddons/parallel/HelmTiller 12.08
33 TestAddons/parallel/Olm 61.01
34 TestAddons/parallel/CSI 78.18
35 TestAddons/parallel/GCPAuth 77.49
36 TestCertOptions 52.47
38 TestForceSystemdFlag 42.87
39 TestForceSystemdEnv 45.42
40 TestKVMDriverInstallOrUpdate 3.24
44 TestErrorSpam/setup 29.26
45 TestErrorSpam/start 0.92
46 TestErrorSpam/status 0.91
47 TestErrorSpam/pause 8.01
48 TestErrorSpam/unpause 1.45
49 TestErrorSpam/stop 6.35
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 68.36
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 5.44
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.19
60 TestFunctional/serial/CacheCmd/cache/add_remote 6.44
61 TestFunctional/serial/CacheCmd/cache/add_local 2.81
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
63 TestFunctional/serial/CacheCmd/cache/list 0.06
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.53
66 TestFunctional/serial/CacheCmd/cache/delete 0.11
67 TestFunctional/serial/MinikubeKubectlCmd 0.11
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 32.32
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1
72 TestFunctional/serial/LogsFileCmd 1.01
74 TestFunctional/parallel/ConfigCmd 0.37
75 TestFunctional/parallel/DashboardCmd 3.83
76 TestFunctional/parallel/DryRun 0.56
77 TestFunctional/parallel/InternationalLanguage 0.24
78 TestFunctional/parallel/StatusCmd 1.01
81 TestFunctional/parallel/ServiceCmd 14.71
82 TestFunctional/parallel/AddonsCmd 0.15
83 TestFunctional/parallel/PersistentVolumeClaim 48.06
85 TestFunctional/parallel/SSHCmd 0.7
86 TestFunctional/parallel/CpCmd 0.7
87 TestFunctional/parallel/MySQL 36.5
88 TestFunctional/parallel/FileSync 0.33
89 TestFunctional/parallel/CertSync 1.6
93 TestFunctional/parallel/NodeLabels 0.06
94 TestFunctional/parallel/LoadImage 5.16
95 TestFunctional/parallel/RemoveImage 4.32
96 TestFunctional/parallel/LoadImageFromFile 2.93
97 TestFunctional/parallel/BuildImage 5.8
98 TestFunctional/parallel/ListImages 0.39
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.9
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
104 TestFunctional/parallel/Version/short 0.06
105 TestFunctional/parallel/Version/components 0.78
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
110 TestFunctional/parallel/ProfileCmd/profile_list 0.35
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
112 TestFunctional/parallel/MountCmd/any-port 28.82
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
119 TestFunctional/parallel/MountCmd/specific-port 1.99
120 TestFunctional/delete_busybox_image 0.08
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.04
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.32
147 TestKicCustomNetwork/create_custom_network 29.29
148 TestKicCustomNetwork/use_default_bridge_network 25.93
149 TestKicExistingNetwork 25.92
150 TestMainNoArgs 0.05
153 TestMultiNode/serial/FreshStart2Nodes 96.63
154 TestMultiNode/serial/DeployApp2Nodes 8.95
156 TestMultiNode/serial/AddNode 26.35
157 TestMultiNode/serial/ProfileList 0.29
158 TestMultiNode/serial/CopyFile 2.37
159 TestMultiNode/serial/StopNode 2.46
160 TestMultiNode/serial/StartAfterStop 31.81
161 TestMultiNode/serial/RestartKeepsNodes 130.05
162 TestMultiNode/serial/DeleteNode 5.48
163 TestMultiNode/serial/StopMultiNode 41.3
164 TestMultiNode/serial/RestartMultiNode 69.46
165 TestMultiNode/serial/ValidateNameConflict 30.92
171 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
172 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 10.43
174 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
175 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 9.6
177 TestDebPackageInstall/install_amd64_debian:10/minikube 0
178 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.52
180 TestDebPackageInstall/install_amd64_debian:9/minikube 0
181 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 7.84
183 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
184 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 18.66
186 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
187 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 18.1
189 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
190 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 18.58
192 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
193 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 17.1
199 TestInsufficientStorage 13.03
202 TestKubernetesUpgrade 109.42
203 TestMissingContainerUpgrade 176.83
212 TestPause/serial/Start 82.42
220 TestNetworkPlugins/group/false 0.74
224 TestPause/serial/SecondStartNoReconfiguration 5.64
227 TestPause/serial/Unpause 0.9
229 TestPause/serial/DeletePaused 3.79
230 TestPause/serial/VerifyDeletedResources 0.8
232 TestStartStop/group/old-k8s-version/serial/FirstStart 317.09
234 TestStartStop/group/no-preload/serial/FirstStart 152.49
236 TestStartStop/group/embed-certs/serial/FirstStart 72.05
238 TestStartStop/group/default-k8s-different-port/serial/FirstStart 67.53
239 TestStartStop/group/embed-certs/serial/DeployApp 12.58
240 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
241 TestStartStop/group/embed-certs/serial/Stop 23.07
242 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
243 TestStartStop/group/embed-certs/serial/SecondStart 362.67
244 TestStartStop/group/no-preload/serial/DeployApp 12.42
245 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.62
246 TestStartStop/group/no-preload/serial/Stop 20.57
247 TestStartStop/group/default-k8s-different-port/serial/DeployApp 12.51
248 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
249 TestStartStop/group/no-preload/serial/SecondStart 359.59
250 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.72
251 TestStartStop/group/default-k8s-different-port/serial/Stop 20.84
252 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.22
253 TestStartStop/group/default-k8s-different-port/serial/SecondStart 379.13
254 TestStartStop/group/old-k8s-version/serial/DeployApp 12.48
255 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.71
256 TestStartStop/group/old-k8s-version/serial/Stop 20.76
257 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
258 TestStartStop/group/old-k8s-version/serial/SecondStart 60.92
259 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
260 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
261 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
264 TestStartStop/group/newest-cni/serial/FirstStart 58.14
265 TestStartStop/group/newest-cni/serial/DeployApp 0
266 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.62
267 TestStartStop/group/newest-cni/serial/Stop 20.82
268 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
269 TestStartStop/group/newest-cni/serial/SecondStart 26.6
270 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
271 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
272 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
274 TestNetworkPlugins/group/auto/Start 71.9
275 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
276 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
277 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
279 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.02
280 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.21
281 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
283 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
284 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.08
285 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.29
287 TestNetworkPlugins/group/auto/KubeletFlags 0.31
288 TestNetworkPlugins/group/auto/NetCatPod 12.27
289 TestNetworkPlugins/group/custom-weave/Start 73.69
290 TestNetworkPlugins/group/auto/DNS 0.17
291 TestNetworkPlugins/group/auto/Localhost 0.13
292 TestNetworkPlugins/group/auto/HairPin 0.14
293 TestNetworkPlugins/group/cilium/Start 100.83
294 TestNetworkPlugins/group/calico/Start 106.54
295 TestNetworkPlugins/group/enable-default-cni/Start 59.34
296 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.37
297 TestNetworkPlugins/group/custom-weave/NetCatPod 24.38
298 TestNetworkPlugins/group/kindnet/Start 80.88
299 TestNetworkPlugins/group/cilium/ControllerPod 5.02
300 TestNetworkPlugins/group/cilium/KubeletFlags 0.34
301 TestNetworkPlugins/group/cilium/NetCatPod 11.41
302 TestNetworkPlugins/group/cilium/DNS 0.24
303 TestNetworkPlugins/group/cilium/Localhost 0.19
304 TestNetworkPlugins/group/cilium/HairPin 0.2
305 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
306 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
307 TestNetworkPlugins/group/bridge/Start 52.2
308 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
309 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
310 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
311 TestNetworkPlugins/group/calico/ControllerPod 5.02
312 TestNetworkPlugins/group/calico/KubeletFlags 0.29
313 TestNetworkPlugins/group/calico/NetCatPod 14.27
314 TestNetworkPlugins/group/calico/DNS 163.92
315 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
316 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
317 TestNetworkPlugins/group/bridge/NetCatPod 12.34
318 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
319 TestNetworkPlugins/group/kindnet/NetCatPod 12.26
320 TestNetworkPlugins/group/bridge/DNS 0.17
321 TestNetworkPlugins/group/bridge/Localhost 0.15
322 TestNetworkPlugins/group/bridge/HairPin 0.14
323 TestNetworkPlugins/group/kindnet/DNS 0.19
324 TestNetworkPlugins/group/kindnet/Localhost 0.16
325 TestNetworkPlugins/group/kindnet/HairPin 0.16
326 TestNetworkPlugins/group/calico/Localhost 0.14
327 TestNetworkPlugins/group/calico/HairPin 0.16
TestDownloadOnly/v1.14.0/json-events (10.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200750-13784 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200750-13784 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.940641414s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (10.94s)
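
The `-o=json` flag used here makes minikube emit its progress as a stream of JSON event objects, which is what the json-events test consumes. A minimal sketch of inspecting that stream by hand with jq; the event type string and field names below are assumptions about the JSON-output schema, not values taken from this log:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200750-13784 --force --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'   # type/field names assumed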

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)
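
What preload-exists asserts can be spot-checked manually after a download-only run; a minimal sketch, assuming minikube's default cache layout (the path pattern and version-prefix glob are assumptions, not taken from this log):

	ls ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-*-v1.14.0-cri-o-overlay-amd64.tar.lz4   # path layout assumed; prefix varies by release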

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200750-13784
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200750-13784: exit status 85 (63.63283ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:07:50
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:07:50.177294   13796 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:07:50.177392   13796 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:50.177411   13796 out.go:311] Setting ErrFile to fd 2...
	I0813 20:07:50.177414   13796 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:50.177525   13796 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:07:50.177638   13796 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:07:50.177833   13796 out.go:305] Setting JSON to true
	I0813 20:07:50.212256   13796 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":3033,"bootTime":1628882237,"procs":134,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:07:50.212390   13796 start.go:121] virtualization: kvm guest
	I0813 20:07:50.215274   13796 notify.go:169] Checking for updates...
	I0813 20:07:50.217138   13796 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:07:50.261259   13796 docker.go:132] docker version: linux-19.03.15
	I0813 20:07:50.261351   13796 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:07:50.338882   13796 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:07:50.295450163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:07:50.338968   13796 docker.go:244] overlay module found
	I0813 20:07:50.340825   13796 start.go:278] selected driver: docker
	I0813 20:07:50.340837   13796 start.go:751] validating driver "docker" against <nil>
	I0813 20:07:50.341256   13796 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:07:50.415342   13796 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:07:50.373936592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:07:50.415465   13796 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:07:50.415960   13796 start_flags.go:344] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0813 20:07:50.416042   13796 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 20:07:50.416060   13796 cni.go:93] Creating CNI manager for ""
	I0813 20:07:50.416065   13796 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:07:50.416074   13796 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 20:07:50.416102   13796 start_flags.go:277] config:
	{Name:download-only-20210813200750-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813200750-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:07:50.417983   13796 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:07:50.419283   13796 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:07:50.419352   13796 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:07:50.500754   13796 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:07:50.500778   13796 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:07:50.570400   13796 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:07:50.570424   13796 cache.go:56] Caching tarball of preloaded images
	I0813 20:07:50.570709   13796 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:07:50.572648   13796 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:07:50.764567   13796 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:70b8731eaaa1b4de2d1cd60021fc1260 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:07:58.874440   13796 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:07:58.874530   13796 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:07:59.980177   13796 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on crio
	I0813 20:07:59.980489   13796 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/download-only-20210813200750-13784/config.json ...
	I0813 20:07:59.980521   13796 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/download-only-20210813200750-13784/config.json: {Name:mk3b4d6622c268d5e61d4d7c4d0723da66df0bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:07:59.980743   13796 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0813 20:07:59.980967   13796 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.14.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200750-13784"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.06s)
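
The stdout above also records the preload flow: preload.go resolves the remote tarball URL with an md5 value in the ?checksum= query parameter, downloads it into the .minikube cache, and then verifies the saved file. For readers who want to re-check a cached tarball by hand, here is a minimal Go sketch of that verification step, assuming the cache path and checksum reported in the log; verifyMD5 is an illustrative helper, not minikube's actual API.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares the hex digest to want.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Path and md5 are the ones the log reports for the v1.14.0 preload;
		// adjust the prefix to your own .minikube cache directory.
		tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4")
		if err := verifyMD5(tarball, "70b8731eaaa1b4de2d1cd60021fc1260"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preload checksum OK")
	}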

TestDownloadOnly/v1.21.3/json-events (11.18s)

=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200750-13784 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200750-13784 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.181647292s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (11.18s)

TestDownloadOnly/v1.21.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200750-13784
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200750-13784: exit status 85 (62.827984ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:01
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:01.183650   13940 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:01.183723   13940 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:01.183727   13940 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:01.183730   13940 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:01.183829   13940 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:08:01.183938   13940 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:08:01.184031   13940 out.go:305] Setting JSON to true
	I0813 20:08:01.218420   13940 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":3044,"bootTime":1628882237,"procs":134,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:01.218536   13940 start.go:121] virtualization: kvm guest
	I0813 20:08:01.221114   13940 notify.go:169] Checking for updates...
	I0813 20:08:01.223501   13940 config.go:177] Loaded profile config "download-only-20210813200750-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.14.0
	W0813 20:08:01.223556   13940 start.go:659] api.Load failed for download-only-20210813200750-13784: filestore "download-only-20210813200750-13784": Docker machine "download-only-20210813200750-13784" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:01.223596   13940 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 20:08:01.223627   13940 start.go:659] api.Load failed for download-only-20210813200750-13784: filestore "download-only-20210813200750-13784": Docker machine "download-only-20210813200750-13784" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:01.265594   13940 docker.go:132] docker version: linux-19.03.15
	I0813 20:08:01.265685   13940 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:01.340872   13940 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:01.29911246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:01.340972   13940 docker.go:244] overlay module found
	I0813 20:08:01.343189   13940 start.go:278] selected driver: docker
	I0813 20:08:01.343202   13940 start.go:751] validating driver "docker" against &{Name:download-only-20210813200750-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813200750-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:01.343661   13940 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:01.417183   13940 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:01.377210126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:01.417762   13940 cni.go:93] Creating CNI manager for ""
	I0813 20:08:01.417785   13940 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:08:01.417794   13940 start_flags.go:277] config:
	{Name:download-only-20210813200750-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813200750-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:01.419762   13940 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:08:01.421436   13940 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:08:01.421498   13940 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:08:01.502310   13940 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:08:01.502334   13940 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:08:01.569887   13940 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:08:01.569913   13940 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:01.570184   13940 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 20:08:01.572371   13940 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:08:01.764672   13940 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:5b844d0f443dc130a4f324a367701516 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 20:08:10.075518   13940 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:08:10.075631   13940 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200750-13784"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

TestDownloadOnly/v1.22.0-rc.0/json-events (12.89s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200750-13784 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200750-13784 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.888391201s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (12.89s)

TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200750-13784
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200750-13784: exit status 85 (63.499993ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:12
	Running on machine: debian-jenkins-agent-8
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:12.426154   14085 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:12.426220   14085 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:12.426224   14085 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:12.426227   14085 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:12.426325   14085 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:08:12.426436   14085 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:08:12.426537   14085 out.go:305] Setting JSON to true
	I0813 20:08:12.460471   14085 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":3055,"bootTime":1628882237,"procs":134,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:12.460601   14085 start.go:121] virtualization: kvm guest
	I0813 20:08:12.464176   14085 notify.go:169] Checking for updates...
	I0813 20:08:12.466496   14085 config.go:177] Loaded profile config "download-only-20210813200750-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	W0813 20:08:12.466545   14085 start.go:659] api.Load failed for download-only-20210813200750-13784: filestore "download-only-20210813200750-13784": Docker machine "download-only-20210813200750-13784" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:12.466606   14085 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 20:08:12.466659   14085 start.go:659] api.Load failed for download-only-20210813200750-13784: filestore "download-only-20210813200750-13784": Docker machine "download-only-20210813200750-13784" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:12.508219   14085 docker.go:132] docker version: linux-19.03.15
	I0813 20:08:12.508316   14085 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:12.583261   14085 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:12.539944141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:12.583340   14085 docker.go:244] overlay module found
	I0813 20:08:12.585320   14085 start.go:278] selected driver: docker
	I0813 20:08:12.585337   14085 start.go:751] validating driver "docker" against &{Name:download-only-20210813200750-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813200750-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:12.585838   14085 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:08:12.659471   14085 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-13 20:08:12.617696371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:08:12.660192   14085 cni.go:93] Creating CNI manager for ""
	I0813 20:08:12.660212   14085 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 20:08:12.660222   14085 start_flags.go:277] config:
	{Name:download-only-20210813200750-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210813200750-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:12.662356   14085 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 20:08:12.663797   14085 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:08:12.663826   14085 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0813 20:08:12.741689   14085 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0813 20:08:12.741713   14085 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0813 20:08:12.807915   14085 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:08:12.807959   14085 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:12.808233   14085 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:08:12.927081   14085 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:08:13.120314   14085 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:c7902b63f7bbc786f5f337da25a17477 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0813 20:08:21.112853   14085 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:08:21.112963   14085 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 20:08:22.273973   14085 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on crio
	I0813 20:08:22.274225   14085 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/download-only-20210813200750-13784/config.json ...
	I0813 20:08:22.274469   14085 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0813 20:08:22.274737   14085 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/linux/v1.22.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200750-13784"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)
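
All three LogsDuration subtests end the same way: "minikube logs" exits with status 85 because a download-only profile never creates a control-plane node, and the test accepts that exit code as a pass. A minimal sketch of how a Go caller can distinguish that exit code with os/exec; the binary path and profile name are taken from the log above, and this is not the suite's actual helper.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-20210813200750-13784")
		out, err := cmd.CombinedOutput()

		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A download-only profile reports exit status 85 here, as above.
			fmt.Printf("minikube logs exited %d\n%s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err) // e.g. binary not found
			return
		}
		fmt.Printf("logs succeeded:\n%s", out)
	}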

TestDownloadOnly/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.36s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210813200750-13784
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnlyKic (30.12s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210813200826-13784 --force --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210813200826-13784 --force --alsologtostderr --driver=docker  --container-runtime=crio: (28.689154427s)
helpers_test.go:176: Cleaning up "download-docker-20210813200826-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210813200826-13784
--- PASS: TestDownloadOnlyKic (30.12s)
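
TestDownloadOnlyKic completes quickly here partly because, as the image.go lines in the earlier logs show, the kic base image is already present in the local Docker daemon and the pull is skipped. That presence check can be approximated by shelling out to "docker image inspect", which exits non-zero when the image is absent; a sketch under that assumption (the tag comes from the log, and the code is illustrative rather than minikube's implementation).

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032"
		// `docker image inspect` succeeds only if the image is already local.
		if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
			fmt.Println("image not in local daemon, a pull would be needed:", err)
			return
		}
		fmt.Println("image found in local daemon, pull skipped")
	}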

TestOffline (83.36s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20210813203846-13784 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20210813203846-13784 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m20.175809435s)
helpers_test.go:176: Cleaning up "offline-crio-20210813203846-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20210813203846-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20210813203846-13784: (3.183766623s)
--- PASS: TestOffline (83.36s)

TestAddons/parallel/Registry (23.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 14.269036ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-hg76n" [0bb28729-59bb-42d5-ba36-ff47b8317260] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013597879s

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-dx5mb" [0c04ef2d-b7fa-4313-8fa9-9bfea7d18a20] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009822926s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210813200856-13784 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210813200856-13784 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210813200856-13784 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.968974794s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 ip

=== CONT  TestAddons/parallel/Registry
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.77s)
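
The registry verification above boils down to a reachability probe: run a throwaway busybox pod and "wget --spider" the service's cluster DNS name. The same probe written as an HTTP HEAD request in Go, assuming it runs somewhere that can resolve cluster DNS; a sketch, not the suite's code.

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// Same service DNS name the test probes with wget --spider.
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}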

TestAddons/parallel/MetricsServer (5.6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 14.444767ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-77c99ccb96-wb8ql" [eb3d3448-dc22-4ab0-be69-5b5da787bf66] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01310427s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210813200856-13784 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)
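
This test follows the same pattern as the rest of the group: poll for pods matching a label selector until one reports Running, within a deadline. A minimal client-go sketch of that wait, assuming a kubeconfig at the default path; the selector and the 6m0s budget come from the log, while the simple polling loop stands in for the suite's helpers_test.go logic.

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // the test's 6m0s wait budget
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("healthy:", pods.Items[0].Name)
				return
			}
			time.Sleep(2 * time.Second) // the real helper also polls periodically
		}
		fmt.Println("timed out waiting for k8s-app=metrics-server")
	}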

TestAddons/parallel/HelmTiller (12.08s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 1.555101ms
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-6wvws" [4c7f47dd-cf8b-4dcb-b03e-a9cbf44ee64c] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007017362s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210813200856-13784 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210813200856-13784 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (6.572824099s)
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.08s)

TestAddons/parallel/Olm (61.01s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 14.331303ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 16.499456ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:471: packageserver stabilized in 19.952128ms
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "catalog-operator-75d496484d-b7xnt" [7cac127c-0862-467c-ab24-b4c2036b81ed] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.009239991s

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:343: "olm-operator-859c88c96-wg7bc" [df7c4d0f-fc2e-4a0e-8cf3-a5d17a0d898b] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.008460928s

=== CONT  TestAddons/parallel/Olm
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-765fb55d64-6j8sl" [16d158cf-5164-44e1-a39b-bcf3de9e0267] Running
helpers_test.go:343: "packageserver-765fb55d64-6jlql" [e3152009-cf28-4cd2-aeb5-503ab13e2dcd] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-765fb55d64-6j8sl" [16d158cf-5164-44e1-a39b-bcf3de9e0267] Running
helpers_test.go:343: "packageserver-765fb55d64-6jlql" [e3152009-cf28-4cd2-aeb5-503ab13e2dcd] Running
helpers_test.go:343: "packageserver-765fb55d64-6j8sl" [16d158cf-5164-44e1-a39b-bcf3de9e0267] Running
helpers_test.go:343: "packageserver-765fb55d64-6jlql" [e3152009-cf28-4cd2-aeb5-503ab13e2dcd] Running
helpers_test.go:343: "packageserver-765fb55d64-6j8sl" [16d158cf-5164-44e1-a39b-bcf3de9e0267] Running
helpers_test.go:343: "packageserver-765fb55d64-6jlql" [e3152009-cf28-4cd2-aeb5-503ab13e2dcd] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-765fb55d64-6j8sl" [16d158cf-5164-44e1-a39b-bcf3de9e0267] Running
helpers_test.go:343: "packageserver-765fb55d64-6jlql" [e3152009-cf28-4cd2-aeb5-503ab13e2dcd] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-765fb55d64-6j8sl" [16d158cf-5164-44e1-a39b-bcf3de9e0267] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.009880333s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-7njfz" [b3dea7f8-f9f6-4b19-9fd5-da10fb117b44] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.006115491s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210813200856-13784 create -f testdata/etcd.yaml
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200856-13784 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200856-13784 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200856-13784 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200856-13784 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200856-13784 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200856-13784 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200856-13784 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200856-13784 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200856-13784 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200856-13784 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (61.01s)
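
The repeated `kubectl get csv -n my-etcd` invocations above are a poll: OLM needs time to unpack the operator bundle, so `No resources found` on stderr is expected until the ClusterServiceVersion materializes. A minimal sketch of that retry loop (the helper name and timeout are illustrative; the context and namespace names come from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForCSV re-runs `kubectl get csv` until the output no longer reports
// "No resources found", i.e. until OLM has created the ClusterServiceVersion.
func waitForCSV(kubeContext, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubeContext,
			"get", "csv", "-n", namespace).CombinedOutput()
		s := string(out)
		if strings.TrimSpace(s) != "" && !strings.Contains(s, "No resources found") {
			return nil // at least one CSV is listed
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("no CSV in namespace %q within %v", namespace, timeout)
}

func main() {
	if err := waitForCSV("addons-20210813200856-13784", "my-etcd", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```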

TestAddons/parallel/CSI (78.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 4.457746ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210813200856-13784 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200856-13784 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210813200856-13784 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [85b2da42-c386-4ba3-97f4-20fbd9c773ec] Pending
helpers_test.go:343: "task-pv-pod" [85b2da42-c386-4ba3-97f4-20fbd9c773ec] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [85b2da42-c386-4ba3-97f4-20fbd9c773ec] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.059671092s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210813200856-13784 create -f testdata/csi-hostpath-driver/snapshot.yaml
2021/08/13 20:12:39 [DEBUG] GET http://192.168.49.2:5000

=== CONT  TestAddons/parallel/CSI
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210813200856-13784 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210813200856-13784 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210813200856-13784 delete pod task-pv-pod

=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-20210813200856-13784 delete pod task-pv-pod: (12.338547235s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210813200856-13784 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210813200856-13784 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200856-13784 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210813200856-13784 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [add2c7b3-f135-46c4-87e9-6e4d53720ae8] Pending
helpers_test.go:343: "task-pv-pod-restore" [add2c7b3-f135-46c4-87e9-6e4d53720ae8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [add2c7b3-f135-46c4-87e9-6e4d53720ae8] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 36.00549685s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210813200856-13784 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210813200856-13784 delete pod task-pv-pod-restore: (2.120538387s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210813200856-13784 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210813200856-13784 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable csi-hostpath-driver --alsologtostderr -v=1

=== CONT  TestAddons/parallel/CSI
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.437865763s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (78.18s)
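
The `helpers_test.go:393`/`helpers_test.go:418` lines above wait on a single field via `-o jsonpath=...` (a PVC's `.status.phase`, a VolumeSnapshot's `.status.readyToUse`). A rough sketch of that polling pattern, assuming a hypothetical `waitJSONPath` helper rather than the suite's real one:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitJSONPath polls one field of one object until it equals want.
func waitJSONPath(kubeContext, kind, name, ns, jsonPath, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", kind, name,
			"-n", ns, "-o", "jsonpath="+jsonPath).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("%s/%s: %s never became %q within %v", kind, name, jsonPath, want, timeout)
}

func main() {
	ctx := "addons-20210813200856-13784"
	// Mirrors helpers_test.go:393 above: wait for the restored claim to bind.
	if err := waitJSONPath(ctx, "pvc", "hpvc-restore", "default", "{.status.phase}", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```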

TestAddons/parallel/GCPAuth (77.49s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210813200856-13784 create -f testdata/busybox.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [7213cd9d-9861-4c1b-a7e1-b768b9b0fbd3] Pending
helpers_test.go:343: "busybox" [7213cd9d-9861-4c1b-a7e1-b768b9b0fbd3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [7213cd9d-9861-4c1b-a7e1-b768b9b0fbd3] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 13.01929372s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210813200856-13784 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210813200856-13784 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210813200856-13784 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-fjb6v" [ef90cbad-4758-4500-bc37-0322ef6b25c7] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-fjb6v" [ef90cbad-4758-4500-bc37-0322ef6b25c7] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 27.040032746s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210813200856-13784 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-h95lz" [8744d5ec-7837-47bb-87d5-a68e474c9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-h95lz" [8744d5ec-7837-47bb-87d5-a68e474c9fb8] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 29.007323765s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable gcp-auth --alsologtostderr -v=1

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200856-13784 addons disable gcp-auth --alsologtostderr -v=1: (6.675206292s)
--- PASS: TestAddons/parallel/GCPAuth (77.49s)

TestCertOptions (52.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210813203935-13784 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210813203935-13784 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (48.793021182s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210813203935-13784 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210813203935-13784 config view
helpers_test.go:176: Cleaning up "cert-options-20210813203935-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210813203935-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210813203935-13784: (3.233896267s)
--- PASS: TestCertOptions (52.47s)
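
The test validates the extra `--apiserver-ips`/`--apiserver-names` by dumping `/var/lib/minikube/certs/apiserver.crt` with `openssl x509 -text -noout`. The same check can be done natively; a hedged sketch (standalone, not the test's code) that parses the certificate's SANs:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Path taken from the ssh command in the log above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("SAN DNS names:", cert.DNSNames) // expect localhost, www.google.com, ...
	for _, ip := range cert.IPAddresses {
		if ip.Equal(net.ParseIP("192.168.15.15")) {
			fmt.Println("found requested --apiserver-ips entry:", ip)
		}
	}
}
```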

TestForceSystemdFlag (42.87s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210813203846-13784 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210813203846-13784 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.618181881s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210813203846-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210813203846-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210813203846-13784: (3.253718199s)
--- PASS: TestForceSystemdFlag (42.87s)

TestForceSystemdEnv (45.42s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210813203849-13784 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210813203849-13784 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.13483411s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210813203849-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210813203849-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210813203849-13784: (3.289184686s)
--- PASS: TestForceSystemdEnv (45.42s)

TestKVMDriverInstallOrUpdate (3.24s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.24s)

TestErrorSpam/setup (29.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210813201822-13784 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813201822-13784 --driver=docker  --container-runtime=crio
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210813201822-13784 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813201822-13784 --driver=docker  --container-runtime=crio: (29.25831806s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (29.26s)

TestErrorSpam/start (0.92s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 start --dry-run
--- PASS: TestErrorSpam/start (0.92s)

TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (8.01s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 pause: exit status 80 (2.383519621s)

-- stdout --
	* Pausing node nospam-20210813201822-13784 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: runc: sudo runc pause aec61bb09594d993f97e2493b2d14897144d3aa73b7be18f086c4425ef348584 b3466c6012cd25ada301842e45111d144607dcd70632b63903abc9f6033c7644: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:18:56Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭───────────────────────────────────────────────────────────────────────────────╮
	│                                                                               │
	│    * If the above advice does not help, please let us know:                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                 │
	│                                                                               │
	│    * Please attach the following file to the GitHub issue:                    │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 pause: (4.573092777s)
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 pause
error_spam_test.go:179: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 pause: (1.052717239s)
--- PASS: TestErrorSpam/pause (8.01s)
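
The first `pause` attempt fails with GUEST_PAUSE because two container IDs were handed to a single `runc pause` invocation, and runc's usage text above says the command takes exactly one argument; the later attempts, which succeeded, imply one call per container. A minimal sketch of that one-ID-per-call loop (placeholder IDs, not minikube's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// pauseContainers issues one `runc pause` per ID; passing several IDs to a
// single invocation fails with `"pause" requires exactly 1 argument(s)`,
// exactly as the stderr above shows.
func pauseContainers(ids []string) error {
	for _, id := range ids {
		if out, err := exec.Command("sudo", "runc", "pause", id).CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
		}
	}
	return nil
}

func main() {
	// Placeholder IDs; substitute real container IDs from `runc list`.
	if err := pauseContainers([]string{"container-a", "container-b"}); err != nil {
		fmt.Println(err)
	}
}
```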

TestErrorSpam/unpause (1.45s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 unpause
--- PASS: TestErrorSpam/unpause (1.45s)

TestErrorSpam/stop (6.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 stop: (6.081435524s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201822-13784 --log_dir /tmp/nospam-20210813201822-13784 stop
--- PASS: TestErrorSpam/stop (6.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/test/nested/copy/13784/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.36s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201915-13784 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201915-13784 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.364292045s)
--- PASS: TestFunctional/serial/StartWithProxy (68.36s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.44s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201915-13784 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201915-13784 --alsologtostderr -v=8: (5.443179106s)
functional_test.go:631: soft start took 5.443995052s for "functional-20210813201915-13784" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.44s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.19s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210813201915-13784 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.19s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 cache add k8s.gcr.io/pause:3.3: (2.856152597s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 cache add k8s.gcr.io/pause:latest: (2.855205814s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.44s)

TestFunctional/serial/CacheCmd/cache/add_local (2.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210813201915-13784 /tmp/functional-20210813201915-13784072614633
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 cache add minikube-local-cache-test:functional-20210813201915-13784
functional_test.go:1024: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 cache add minikube-local-cache-test:functional-20210813201915-13784: (2.544201556s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 cache delete minikube-local-cache-test:functional-20210813201915-13784
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210813201915-13784
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.81s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (276.35325ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 cache reload: (1.670983031s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.53s)
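
The sequence above is a remove / verify-absent / reload / verify-present cycle: `crictl rmi` deletes the image on the node, `crictl inspecti` is then expected to fail (the exit status 1 above), and `minikube cache reload` re-pushes the image from the local cache so the final `inspecti` succeeds. A sketch of the same cycle via the commands the log shows, with simplified error handling:

```go
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	profile := "functional-20210813201915-13784" // profile name from the log
	img := "k8s.gcr.io/pause:latest"
	bin := "out/minikube-linux-amd64"

	_ = run(bin, "-p", profile, "ssh", "sudo crictl rmi "+img)
	// While the image is absent, inspecti is expected to fail (exit status 1).
	if run(bin, "-p", profile, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("image unexpectedly still present")
	}
	_ = run(bin, "-p", profile, "cache", "reload")
	// After the reload the image should be back on the node.
	if err := run(bin, "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}
```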

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 kubectl -- --context functional-20210813201915-13784 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210813201915-13784 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (32.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201915-13784 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813201915-13784 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.320342359s)
functional_test.go:719: restart took 32.320457206s for "functional-20210813201915-13784" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.32s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210813201915-13784 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 logs: (1.001461169s)
--- PASS: TestFunctional/serial/LogsCmd (1.00s)

TestFunctional/serial/LogsFileCmd (1.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 logs --file /tmp/functional-20210813201915-13784916254516/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 logs --file /tmp/functional-20210813201915-13784916254516/logs.txt: (1.011830349s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201915-13784 config get cpus: exit status 14 (71.178124ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 config set cpus 2

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201915-13784 config get cpus: exit status 14 (56.932936ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DashboardCmd (3.83s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813201915-13784 --alsologtostderr -v=1]
2021/08/13 20:21:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813201915-13784 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 56638: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.83s)

TestFunctional/parallel/DryRun (0.56s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201915-13784 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813201915-13784 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (229.89867ms)

-- stdout --
	* [functional-20210813201915-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0813 20:21:36.504540   56234 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:21:36.504615   56234 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:21:36.504624   56234 out.go:311] Setting ErrFile to fd 2...
	I0813 20:21:36.504628   56234 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:21:36.504740   56234 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:21:36.504978   56234 out.go:305] Setting JSON to false
	I0813 20:21:36.540584   56234 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":3859,"bootTime":1628882237,"procs":239,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:21:36.540704   56234 start.go:121] virtualization: kvm guest
	I0813 20:21:36.543239   56234 out.go:177] * [functional-20210813201915-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:21:36.544704   56234 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:21:36.546086   56234 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:21:36.547430   56234 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:21:36.548745   56234 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:21:36.549121   56234 config.go:177] Loaded profile config "functional-20210813201915-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:21:36.549514   56234 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:21:36.595696   56234 docker.go:132] docker version: linux-19.03.15
	I0813 20:21:36.595789   56234 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:21:36.672238   56234 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-13 20:21:36.62914116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:21:36.672343   56234 docker.go:244] overlay module found
	I0813 20:21:36.674341   56234 out.go:177] * Using the docker driver based on existing profile
	I0813 20:21:36.674364   56234 start.go:278] selected driver: docker
	I0813 20:21:36.674369   56234 start.go:751] validating driver "docker" against &{Name:functional-20210813201915-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813201915-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:21:36.674482   56234 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:21:36.674522   56234 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:21:36.674544   56234 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 20:21:36.675885   56234 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:21:36.677892   56234 out.go:177] 
	W0813 20:21:36.678021   56234 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0813 20:21:36.679399   56234 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201915-13784 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.56s)
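
The dry run exits with status 23 because the requested 250MiB is below the 1800MB usable minimum named in the RSRC_INSUFFICIENT_REQ_MEMORY message above. Purely as illustration (this is not minikube's validator), the bound check amounts to:

```go
package main

import "fmt"

// usableMinimumMB is taken from the log's error message; the function is a
// standalone illustration, not minikube's actual validation code.
const usableMinimumMB = 1800

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < usableMinimumMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, usableMinimumMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil { // the --memory 250MB from the dry run
		fmt.Println(err)
	}
}
```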

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813201915-13784 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813201915-13784 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (235.151132ms)

-- stdout --
	* [functional-20210813201915-13784] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0813 20:21:37.060238   56421 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:21:37.060328   56421 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:21:37.060346   56421 out.go:311] Setting ErrFile to fd 2...
	I0813 20:21:37.060349   56421 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:21:37.060480   56421 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:21:37.060691   56421 out.go:305] Setting JSON to false
	I0813 20:21:37.096323   56421 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":3860,"bootTime":1628882237,"procs":239,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:21:37.096456   56421 start.go:121] virtualization: kvm guest
	I0813 20:21:37.098617   56421 out.go:177] * [functional-20210813201915-13784] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0813 20:21:37.099986   56421 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:21:37.101277   56421 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:21:37.102618   56421 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:21:37.103922   56421 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:21:37.104326   56421 config.go:177] Loaded profile config "functional-20210813201915-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:21:37.104694   56421 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:21:37.151379   56421 docker.go:132] docker version: linux-19.03.15
	I0813 20:21:37.151472   56421 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:21:37.232443   56421 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-13 20:21:37.188083099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:21:37.232538   56421 docker.go:244] overlay module found
	I0813 20:21:37.234462   56421 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0813 20:21:37.234486   56421 start.go:278] selected driver: docker
	I0813 20:21:37.234491   56421 start.go:751] validating driver "docker" against &{Name:functional-20210813201915-13784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813201915-13784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:21:37.234603   56421 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:21:37.234635   56421 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:21:37.234654   56421 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0813 20:21:37.236042   56421 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:21:37.237983   56421 out.go:177] 
	W0813 20:21:37.238100   56421 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0813 20:21:37.239348   56421 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
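
For reference, the French stderr above is the localized form of the same failure DryRun exercised: the cgroup memory warning plus the RSRC_INSUFFICIENT_REQ_MEMORY exit for a 250MiB request below the 1800MB minimum. A minimal sketch of locale-keyed output with golang.org/x/text follows; minikube's real translations live in embedded JSON catalogs, so this only illustrates the idea the test verifies:

package main

import (
	"golang.org/x/text/language"
	"golang.org/x/text/message"
)

func main() {
	// Register a French form for an English key (strings copied from the log).
	message.SetString(language.French, "Your cgroup does not allow setting memory.",
		"Votre groupe de contrôle ne permet pas de définir la mémoire.")

	// The locale would normally come from LANG/LC_ALL; hard-coded here.
	p := message.NewPrinter(language.French)
	p.Printf("Your cgroup does not allow setting memory.")
}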

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 status
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)
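
The -f argument above is an ordinary Go text/template rendered against a status struct ("kublet" is a label typo carried over from the test's own format string; the field lookup {{.Kubelet}} is what actually resolves). A sketch with a hypothetical Status type, not minikube's internal one:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in; only the field names referenced by the template matter.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}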

TestFunctional/parallel/ServiceCmd (14.71s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210813201915-13784 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210813201915-13784 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-ftvvk" [163956fd-62e8-4552-a7a6-d64aa166ccfb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:343: "hello-node-6cbfcd7cbc-ftvvk" [163956fd-62e8-4552-a7a6-d64aa166ccfb] Running
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 13.013902831s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 service list
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.49.2:32277
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:32277
functional_test.go:1431: Attempting to fetch http://192.168.49.2:32277 ...
functional_test.go:1450: http://192.168.49.2:32277: success! body:

Hostname: hello-node-6cbfcd7cbc-ftvvk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32277
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (14.71s)
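
The final step above fetches the NodePort URL that `service hello-node --url` printed and reads the echoserver reply. A minimal sketch of that fetch, with the URL hard-coded from this run:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NodePort endpoint printed by the service command in this run.
	resp, err := http.Get("http://192.168.49.2:32277")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s", body) // echoserver answers with the request details shown above
}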

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (48.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [2fbf80ea-32ef-4f16-8f0b-108a1b0c10e0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006978934s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210813201915-13784 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210813201915-13784 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813201915-13784 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813201915-13784 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [4dd15ee0-1737-476a-aa56-be6521f4a04d] Pending
helpers_test.go:343: "sp-pod" [4dd15ee0-1737-476a-aa56-be6521f4a04d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [4dd15ee0-1737-476a-aa56-be6521f4a04d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.008756854s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210813201915-13784 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210813201915-13784 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210813201915-13784 delete -f testdata/storage-provisioner/pod.yaml: (2.155449004s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813201915-13784 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [952c3a66-5071-4e1b-86eb-d46ac9bcc265] Pending
helpers_test.go:343: "sp-pod" [952c3a66-5071-4e1b-86eb-d46ac9bcc265] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [952c3a66-5071-4e1b-86eb-d46ac9bcc265] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0058708s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210813201915-13784 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.06s)
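
The sequence above is a persistence check: write through the PVC-backed mount, delete and recreate the pod, then list the mount. A sketch that shells out to kubectl the way the test does (the wait for the recreated pod to become Running is elided):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the test cluster's context.
func kubectl(args ...string) ([]byte, error) {
	base := []string{"--context", "functional-20210813201915-13784"}
	return exec.Command("kubectl", append(base, args...)...).CombinedOutput()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait for the recreated sp-pod to be Running, then:
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("%s", out) // "foo" shows the volume outlived the pod
}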

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "echo hello"
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (0.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.70s)

TestFunctional/parallel/MySQL (36.5s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210813201915-13784 replace --force -f testdata/mysql.yaml
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-9pj2l" [3bd395e8-61d7-4240-82f7-cc422a92f6f7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:343: "mysql-9bbbc5bbb-9pj2l" [3bd395e8-61d7-4240-82f7-cc422a92f6f7] Running
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.009779296s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201915-13784 exec mysql-9bbbc5bbb-9pj2l -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813201915-13784 exec mysql-9bbbc5bbb-9pj2l -- mysql -ppassword -e "show databases;": exit status 1 (273.896066ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201915-13784 exec mysql-9bbbc5bbb-9pj2l -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813201915-13784 exec mysql-9bbbc5bbb-9pj2l -- mysql -ppassword -e "show databases;": exit status 1 (257.19338ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201915-13784 exec mysql-9bbbc5bbb-9pj2l -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813201915-13784 exec mysql-9bbbc5bbb-9pj2l -- mysql -ppassword -e "show databases;": exit status 1 (149.012106ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813201915-13784 exec mysql-9bbbc5bbb-9pj2l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.50s)
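
The ERROR 1045 and ERROR 2002 exits above just mean MySQL inside the pod is still initializing; the test keeps re-running the query until it succeeds. A sketch of that retry loop (the interval is a guess; the test uses its own retry helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-20210813201915-13784",
		"exec", "mysql-9bbbc5bbb-9pj2l", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(2 * time.Second) // hypothetical backoff between attempts
	}
}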

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/13784/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo cat /etc/test/nested/copy/13784/hosts"
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (1.6s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/13784.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo cat /etc/ssl/certs/13784.pem"
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/13784.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo cat /usr/share/ca-certificates/13784.pem"
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/137842.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo cat /etc/ssl/certs/137842.pem"
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/137842.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo cat /usr/share/ca-certificates/137842.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.60s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210813201915-13784 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/LoadImage (5.16s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33
functional_test.go:239: (dbg) Done: docker pull busybox:1.33: (2.0353145s)
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210813201915-13784
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 image load docker.io/library/busybox:load-functional-20210813201915-13784
functional_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 image load docker.io/library/busybox:load-functional-20210813201915-13784: (1.829819771s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201915-13784 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210813201915-13784
functional_test.go:373: (dbg) Done: out/minikube-linux-amd64 ssh -p functional-20210813201915-13784 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210813201915-13784: (1.253249391s)
--- PASS: TestFunctional/parallel/LoadImage (5.16s)
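
The load round-trip above is: pull with the host's Docker, retag, push into the node with `minikube image load`, then verify inside the node with crictl. The same sequence as a Go sketch, mirroring the commands in the log:

package main

import "os/exec"

func run(name string, args ...string) {
	if err := exec.Command(name, args...).Run(); err != nil {
		panic(err)
	}
}

func main() {
	tag := "docker.io/library/busybox:load-functional-20210813201915-13784"
	run("docker", "pull", "busybox:1.33")
	run("docker", "tag", "busybox:1.33", tag)
	run("out/minikube-linux-amd64", "-p", "functional-20210813201915-13784", "image", "load", tag)
	// Verify with the CRI client inside the node, as the test does.
	run("out/minikube-linux-amd64", "ssh", "-p", "functional-20210813201915-13784",
		"--", "sudo", "crictl", "inspecti", tag)
}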

TestFunctional/parallel/RemoveImage (4.32s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
functional_test.go:331: (dbg) Done: docker pull busybox:1.32: (2.033905718s)
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210813201915-13784
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 image load docker.io/library/busybox:remove-functional-20210813201915-13784
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 image load docker.io/library/busybox:remove-functional-20210813201915-13784: (1.369692188s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 image rm docker.io/library/busybox:remove-functional-20210813201915-13784
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201915-13784 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (4.32s)

TestFunctional/parallel/LoadImageFromFile (2.93s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:279: (dbg) Done: docker pull busybox:1.31: (2.048883563s)
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210813201915-13784
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210813201915-13784
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 image load /home/jenkins/workspace/Docker_Linux_crio_integration/busybox.tar
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201915-13784 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (2.93s)

TestFunctional/parallel/BuildImage (5.8s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 image build -t localhost/my-image:functional-20210813201915-13784 testdata/build
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813201915-13784 image build -t localhost/my-image:functional-20210813201915-13784 testdata/build: (5.520069589s)
functional_test.go:412: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210813201915-13784 image build -t localhost/my-image:functional-20210813201915-13784 testdata/build:
STEP 1: FROM busybox
STEP 2: RUN true
--> bc452e14aaa
STEP 3: ADD content.txt /
STEP 4: COMMIT localhost/my-image:functional-20210813201915-13784
--> e329e2d09d7
Successfully tagged localhost/my-image:functional-20210813201915-13784
e329e2d09d73eec8da51b944628be1d6f703df57b0d298ee1b5d0cdf5a96ddc4
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813201915-13784 -- sudo crictl inspecti localhost/my-image:functional-20210813201915-13784
--- PASS: TestFunctional/parallel/BuildImage (5.80s)
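
The STEP lines are buildah-style build output (crio is the container runtime here), so a Dockerfile consistent with those steps would be the sketch below; content.txt is whatever the testdata/build directory actually provides, which the log does not show:

FROM busybox
RUN true
ADD content.txt /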

TestFunctional/parallel/ListImages (0.39s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 image ls
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210813201915-13784 image ls:
localhost/minikube-local-cache-test:functional-20210813201915-13784
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.39s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.9s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo systemctl is-active docker"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo systemctl is-active docker": exit status 1 (497.018185ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo systemctl is-active containerd"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo systemctl is-active containerd": exit status 1 (397.851575ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.90s)
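
The expected non-zero exits above come from `systemctl is-active`, which exits 0 only when the unit is active and non-zero (3 for inactive units) otherwise; the remote status 3 surfaces in stderr while the `minikube ssh` wrapper itself exits 1. A sketch of turning that convention into a boolean:

package main

import (
	"fmt"
	"os/exec"
)

// isActive follows systemctl's convention: exit 0 means active,
// any non-zero exit means the unit is not running.
func isActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	// In the test the same command runs inside the node via `minikube ssh`;
	// locally this checks the host instead.
	fmt.Println("docker active:", isActive("docker"))
	fmt.Println("containerd active:", isActive("containerd"))
}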

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.78s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210813201915-13784 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1245: Took "298.84127ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "54.376973ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1295: Took "293.144167ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "55.014791ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/MountCmd/any-port (28.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813201915-13784 /tmp/mounttest855208707:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628886091910143393" to /tmp/mounttest855208707/created-by-test
functional_test_mount_test.go:110: wrote "test-1628886091910143393" to /tmp/mounttest855208707/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628886091910143393" to /tmp/mounttest855208707/test-1628886091910143393
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.260778ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 20:21 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 20:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 20:21 test-1628886091910143393
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh cat /mount-9p/test-1628886091910143393
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210813201915-13784 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [dfd10bf4-d717-45f1-9e36-9d320f54d926] Pending
helpers_test.go:343: "busybox-mount" [dfd10bf4-d717-45f1-9e36-9d320f54d926] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:343: "busybox-mount" [dfd10bf4-d717-45f1-9e36-9d320f54d926] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 26.010577623s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210813201915-13784 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201915-13784 /tmp/mounttest855208707:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (28.82s)
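
The first findmnt probe above fails because the 9p mount can lag the `minikube mount` daemon briefly; the test simply re-runs the probe. A sketch of that loop (the retry interval is a guess):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 5; i++ {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-20210813201915-13784", "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(time.Second) // hypothetical pause before re-probing
	}
	fmt.Println("mount never appeared")
}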

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210813201915-13784 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.98.4.241 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210813201915-13784 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813201915-13784 /tmp/mounttest424197032:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.579013ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201915-13784 /tmp/mounttest424197032:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh "sudo umount -f /mount-9p": exit status 1 (271.498198ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210813201915-13784 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813201915-13784 /tmp/mounttest424197032:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)
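The passing umount path above tolerates exit status 32 because the 9p mount was already gone when cleanup ran. For readers reproducing the findmnt probe outside the harness, a minimal Go sketch follows, assuming the binary path and profile name from this run; it is not the suite's actual helper code.

	// Sketch only: probe whether /mount-9p is a 9p mount inside the guest.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// findmnt exits non-zero when the target is not a mount point, which
		// surfaces as the "exit status 1" seen before the mount settles.
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-20210813201915-13784",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("mount not visible yet: %v\n%s", err, out)
			return
		}
		fmt.Printf("9p mount present:\n%s", out)
	}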

TestFunctional/delete_busybox_image (0.08s)
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210813201915-13784
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210813201915-13784
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210813201915-13784
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210813201915-13784
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210813202340-13784 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210813202340-13784 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.746912ms)
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210813202340-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"4b8a1a09-20cd-4cb1-8504-467420b81e1e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig"},"datacontenttype":"application/json","id":"276292ca-cef0-4eac-94a0-6248a45d77cc","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"f17e562a-cf45-4167-a70b-dc25d7cabd6f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube"},"datacontenttype":"application/json","id":"5c31f3dd-6b13-4caa-b331-f7bad9b01ce5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"168d2d6b-4e96-4601-85cd-9f155701b492","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"eea8027c-3bc5-486b-94fe-7ff291b7756e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210813202340-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210813202340-13784
--- PASS: TestErrorJSONOutput (0.32s)
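The --output=json lines above are CloudEvents-style envelopes; the final io.k8s.sigs.minikube.error event carries the exit code the test asserts on. A minimal decoding sketch, using only field names visible in the sample output, not minikube's own type definitions:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// event mirrors just the fields visible in the log above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		line := `{"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"},"type":"io.k8s.sigs.minikube.error"}`
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			log.Fatal(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: exit code %s\n", ev.Data["name"], ev.Data["exitcode"])
		}
	}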

TestKicCustomNetwork/create_custom_network (29.29s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210813202340-13784 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210813202340-13784 --network=: (26.824589226s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210813202340-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210813202340-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210813202340-13784: (2.425414676s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.29s)
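The "docker network ls --format {{.Name}}" step is how the test confirms the per-profile network was created. A minimal sketch of that check; the profile name is copied from this run, and the matching logic is an assumption rather than the test's code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			fmt.Println("docker network ls failed:", err)
			return
		}
		for _, name := range strings.Fields(string(out)) {
			if name == "docker-network-20210813202340-13784" {
				fmt.Println("custom network exists:", name)
				return
			}
		}
		fmt.Println("custom network not found")
	}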

TestKicCustomNetwork/use_default_bridge_network (25.93s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210813202409-13784 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210813202409-13784 --network=bridge: (23.608051589s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210813202409-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210813202409-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210813202409-13784: (2.284609442s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.93s)

TestKicExistingNetwork (25.92s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210813202435-13784 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210813202435-13784 --network=existing-network: (23.250741395s)
helpers_test.go:176: Cleaning up "existing-network-20210813202435-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210813202435-13784
E0813 20:25:00.052732   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210813202435-13784: (2.426637195s)
--- PASS: TestKicExistingNetwork (25.92s)
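This test implies a network that exists before minikube starts: pre-create it with the docker CLI, then point minikube at it via --network. A minimal sketch under that assumption, reusing the names and flags from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		if out, err := exec.Command("docker", "network", "create", "existing-network").CombinedOutput(); err != nil {
			fmt.Printf("network create failed: %v\n%s", err, out)
			return
		}
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "existing-network-20210813202435-13784", "--network=existing-network")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("minikube start failed: %v\n%s", err, out)
		}
	}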

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (96.63s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202501-13784 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0813 20:26:16.135433   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:16.140746   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:16.151002   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:16.171306   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:16.211747   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:16.292151   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:16.452531   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:16.773142   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:17.413901   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:18.694835   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:21.255404   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:26.375928   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:26:36.617070   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202501-13784 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m36.101010251s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.63s)

TestMultiNode/serial/DeployApp2Nodes (8.95s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- rollout status deployment/busybox: (6.612011644s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-7gjcw -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-nhdx8 -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-7gjcw -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-nhdx8 -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-7gjcw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202501-13784 -- exec busybox-84b6686758-nhdx8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.95s)
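The six kubectl exec calls above exercise DNS from both busybox replicas, one per node. A minimal sketch of the same probe loop; the pod names are copied from this run and would normally come from the jsonpath query shown above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-84b6686758-7gjcw", "busybox-84b6686758-nhdx8"}
		hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, host := range hosts {
				out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", host).CombinedOutput()
				if err != nil {
					fmt.Printf("%s failed to resolve %s: %v\n%s", pod, host, err, out)
				}
			}
		}
	}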

TestMultiNode/serial/AddNode (26.35s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813202501-13784 -v 3 --alsologtostderr
E0813 20:26:57.098143   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:27:16.207517   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210813202501-13784 -v 3 --alsologtostderr: (25.620855822s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.35s)

TestMultiNode/serial/ProfileList (0.29s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (2.37s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 cp testdata/cp-test.txt multinode-20210813202501-13784-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 ssh -n multinode-20210813202501-13784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 cp testdata/cp-test.txt multinode-20210813202501-13784-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 ssh -n multinode-20210813202501-13784-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.37s)
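The pattern here is copy-then-read-back: minikube cp pushes the file to each node and "ssh ... sudo cat" verifies it landed. A minimal sketch for the -m02 node, assuming the profile and paths from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "multinode-20210813202501-13784"
		run := func(args ...string) ([]byte, error) {
			full := append([]string{"-p", profile}, args...)
			return exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
		}
		if out, err := run("cp", "testdata/cp-test.txt", profile+"-m02:/home/docker/cp-test.txt"); err != nil {
			fmt.Printf("cp failed: %v\n%s", err, out)
			return
		}
		out, err := run("ssh", "-n", profile+"-m02", "sudo cat /home/docker/cp-test.txt")
		if err != nil {
			fmt.Printf("readback failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("copied contents: %s", out)
	}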

TestMultiNode/serial/StopNode (2.46s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202501-13784 node stop m03: (1.311765988s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202501-13784 status: exit status 7 (578.12445ms)
-- stdout --
	multinode-20210813202501-13784
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813202501-13784-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813202501-13784-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --alsologtostderr: exit status 7 (566.913012ms)
-- stdout --
	multinode-20210813202501-13784
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813202501-13784-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813202501-13784-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0813 20:27:25.859690   89666 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:27:25.859810   89666 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:27:25.859817   89666 out.go:311] Setting ErrFile to fd 2...
	I0813 20:27:25.859821   89666 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:27:25.859928   89666 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:27:25.860094   89666 out.go:305] Setting JSON to false
	I0813 20:27:25.860117   89666 mustload.go:65] Loading cluster: multinode-20210813202501-13784
	I0813 20:27:25.860405   89666 config.go:177] Loaded profile config "multinode-20210813202501-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:27:25.860421   89666 status.go:253] checking status of multinode-20210813202501-13784 ...
	I0813 20:27:25.860783   89666 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Status}}
	I0813 20:27:25.899140   89666 status.go:328] multinode-20210813202501-13784 host status = "Running" (err=<nil>)
	I0813 20:27:25.899175   89666 host.go:66] Checking if "multinode-20210813202501-13784" exists ...
	I0813 20:27:25.899437   89666 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202501-13784
	I0813 20:27:25.936182   89666 host.go:66] Checking if "multinode-20210813202501-13784" exists ...
	I0813 20:27:25.936457   89666 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:27:25.936496   89666 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784
	I0813 20:27:25.974315   89666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784/id_rsa Username:docker}
	I0813 20:27:26.069867   89666 ssh_runner.go:149] Run: systemctl --version
	I0813 20:27:26.073286   89666 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:27:26.083140   89666 kubeconfig.go:93] found "multinode-20210813202501-13784" server: "https://192.168.49.2:8443"
	I0813 20:27:26.083169   89666 api_server.go:164] Checking apiserver status ...
	I0813 20:27:26.083197   89666 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:27:26.103018   89666 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1303/cgroup
	I0813 20:27:26.110583   89666 api_server.go:180] apiserver freezer: "3:freezer:/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/system.slice/crio-d03677368b4fc8505ede42659d596734444bbadce38d84eab2125a4bf8b9ba86.scope"
	I0813 20:27:26.110664   89666 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/7c315ec5ab8c1f02f7bdacd2f36ba3a35b9fe956297a7309d9b44840e7b3c4d3/system.slice/crio-d03677368b4fc8505ede42659d596734444bbadce38d84eab2125a4bf8b9ba86.scope/freezer.state
	I0813 20:27:26.116811   89666 api_server.go:202] freezer state: "THAWED"
	I0813 20:27:26.116836   89666 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 20:27:26.122366   89666 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 20:27:26.122391   89666 status.go:419] multinode-20210813202501-13784 apiserver status = Running (err=<nil>)
	I0813 20:27:26.122402   89666 status.go:255] multinode-20210813202501-13784 status: &{Name:multinode-20210813202501-13784 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:27:26.122421   89666 status.go:253] checking status of multinode-20210813202501-13784-m02 ...
	I0813 20:27:26.122665   89666 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784-m02 --format={{.State.Status}}
	I0813 20:27:26.160856   89666 status.go:328] multinode-20210813202501-13784-m02 host status = "Running" (err=<nil>)
	I0813 20:27:26.160882   89666 host.go:66] Checking if "multinode-20210813202501-13784-m02" exists ...
	I0813 20:27:26.161191   89666 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813202501-13784-m02
	I0813 20:27:26.198884   89666 host.go:66] Checking if "multinode-20210813202501-13784-m02" exists ...
	I0813 20:27:26.199167   89666 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:27:26.199214   89666 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813202501-13784-m02
	I0813 20:27:26.238802   89666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202501-13784-m02/id_rsa Username:docker}
	I0813 20:27:26.329947   89666 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:27:26.338422   89666 status.go:255] multinode-20210813202501-13784-m02 status: &{Name:multinode-20210813202501-13784-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:27:26.338455   89666 status.go:253] checking status of multinode-20210813202501-13784-m03 ...
	I0813 20:27:26.338693   89666 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784-m03 --format={{.State.Status}}
	I0813 20:27:26.375599   89666 status.go:328] multinode-20210813202501-13784-m03 host status = "Stopped" (err=<nil>)
	I0813 20:27:26.375621   89666 status.go:341] host is not running, skipping remaining checks
	I0813 20:27:26.375626   89666 status.go:255] multinode-20210813202501-13784-m03 status: &{Name:multinode-20210813202501-13784-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
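Note that exit status 7 is expected here: minikube status exits non-zero when any node is stopped, while still printing the per-node report. A minimal sketch of how a caller can read both the report and the code, assuming the binary path and profile from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "multinode-20210813202501-13784", "status")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The report is still printed; the exit code just encodes what is down.
			fmt.Printf("status exited %d:\n%s", exitErr.ExitCode(), out)
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
		} else {
			fmt.Printf("all components running:\n%s", out)
		}
	}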

TestMultiNode/serial/StartAfterStop (31.81s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 node start m03 --alsologtostderr
E0813 20:27:38.059360   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:27:43.896624   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202501-13784 node start m03 --alsologtostderr: (30.958548002s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.81s)

TestMultiNode/serial/RestartKeepsNodes (130.05s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202501-13784
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210813202501-13784
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210813202501-13784: (42.269469119s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202501-13784 --wait=true -v=8 --alsologtostderr
E0813 20:28:59.980607   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202501-13784 --wait=true -v=8 --alsologtostderr: (1m27.672211154s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202501-13784
--- PASS: TestMultiNode/serial/RestartKeepsNodes (130.05s)

TestMultiNode/serial/DeleteNode (5.48s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202501-13784 node delete m03: (4.799604372s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.48s)
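The final kubectl call flattens every node's Ready condition through a go-template, so the test can count the surviving nodes after the delete. A minimal sketch of the same query, with the template quoting simplified relative to the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		statuses := strings.Fields(string(out))
		fmt.Printf("%d nodes report a Ready condition: %v\n", len(statuses), statuses)
	}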

TestMultiNode/serial/StopMultiNode (41.3s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202501-13784 stop: (41.046760315s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202501-13784 status: exit status 7 (127.112147ms)
-- stdout --
	multinode-20210813202501-13784
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813202501-13784-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --alsologtostderr: exit status 7 (127.940994ms)
-- stdout --
	multinode-20210813202501-13784
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813202501-13784-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0813 20:30:54.937363  102469 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:30:54.937466  102469 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:30:54.937477  102469 out.go:311] Setting ErrFile to fd 2...
	I0813 20:30:54.937480  102469 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:30:54.937616  102469 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:30:54.937785  102469 out.go:305] Setting JSON to false
	I0813 20:30:54.937807  102469 mustload.go:65] Loading cluster: multinode-20210813202501-13784
	I0813 20:30:54.938120  102469 config.go:177] Loaded profile config "multinode-20210813202501-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:30:54.938137  102469 status.go:253] checking status of multinode-20210813202501-13784 ...
	I0813 20:30:54.938553  102469 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784 --format={{.State.Status}}
	I0813 20:30:54.975738  102469 status.go:328] multinode-20210813202501-13784 host status = "Stopped" (err=<nil>)
	I0813 20:30:54.975758  102469 status.go:341] host is not running, skipping remaining checks
	I0813 20:30:54.975763  102469 status.go:255] multinode-20210813202501-13784 status: &{Name:multinode-20210813202501-13784 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:30:54.975820  102469 status.go:253] checking status of multinode-20210813202501-13784-m02 ...
	I0813 20:30:54.976054  102469 cli_runner.go:115] Run: docker container inspect multinode-20210813202501-13784-m02 --format={{.State.Status}}
	I0813 20:30:55.014136  102469 status.go:328] multinode-20210813202501-13784-m02 host status = "Stopped" (err=<nil>)
	I0813 20:30:55.014157  102469 status.go:341] host is not running, skipping remaining checks
	I0813 20:30:55.014163  102469 status.go:255] multinode-20210813202501-13784-m02 status: &{Name:multinode-20210813202501-13784-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (41.30s)

TestMultiNode/serial/RestartMultiNode (69.46s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202501-13784 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0813 20:31:16.135738   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:31:43.821045   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202501-13784 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m8.718608535s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202501-13784 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (69.46s)

TestMultiNode/serial/ValidateNameConflict (30.92s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202501-13784
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202501-13784-m02 --driver=docker  --container-runtime=crio
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210813202501-13784-m02 --driver=docker  --container-runtime=crio: exit status 14 (106.231968ms)
-- stdout --
	* [multinode-20210813202501-13784-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210813202501-13784-m02' is duplicated with machine name 'multinode-20210813202501-13784-m02' in profile 'multinode-20210813202501-13784'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202501-13784-m03 --driver=docker  --container-runtime=crio
E0813 20:32:16.206944   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202501-13784-m03 --driver=docker  --container-runtime=crio: (27.651786139s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813202501-13784
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210813202501-13784: exit status 80 (268.796499ms)
-- stdout --
	* Adding node m03 to cluster multinode-20210813202501-13784
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210813202501-13784-m03 already exists in multinode-20210813202501-13784-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210813202501-13784-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210813202501-13784-m03: (2.837200203s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.92s)
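The MK_USAGE failure above (exit status 14) is the profile-name uniqueness check: a new profile may not collide with an existing machine name. A minimal illustration of that validation, with names copied from this run; it is not minikube's implementation:

	package main

	import "fmt"

	// validateProfileName rejects a profile that collides with an existing
	// machine name, mirroring the MK_USAGE failure above.
	func validateProfileName(name string, machines []string) error {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
			}
		}
		return nil
	}

	func main() {
		machines := []string{
			"multinode-20210813202501-13784",
			"multinode-20210813202501-13784-m02",
		}
		if err := validateProfileName("multinode-20210813202501-13784-m02", machines); err != nil {
			fmt.Println("X Exiting due to MK_USAGE:", err)
		}
	}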

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.43s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.434650543s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.43s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.6s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.60161745s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.60s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.52s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.514901046s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.52s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (7.84s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (7.84369021s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (7.84s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (18.66s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (18.654824685s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (18.66s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (18.1s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (18.097157883s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (18.10s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (18.58s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (18.576438902s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (18.58s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (17.1s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (17.099551886s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (17.10s)

TestInsufficientStorage (13.03s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210813203833-13784 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E0813 20:38:39.257635   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210813203833-13784 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.266415007s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210813203833-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"53568d0f-ae1f-44ac-8515-42aed2b921fe","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig"},"datacontenttype":"application/json","id":"e69c053c-f5e8-4b3a-9b97-2685aa61e64f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"5f8f9723-14e6-4d47-958b-53e4b849f601","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube"},"datacontenttype":"application/json","id":"6330630f-576c-4069-a20c-e9346d2c63cc","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"a1d9f470-b179-46ae-bc55-abd9c071264c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"e326017d-2ba4-4eeb-829e-34fe80522d7d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"2511a78d-9738-4e84-a483-0b7c9dfdc624","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"20664694-09a8-4d46-81d0-8a5817043c2f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"4d8a4e54-95b3-47b7-8cbd-003d92fc59b4","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210813203833-13784 in cluster insufficient-storage-20210813203833-13784","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"d9ea52de-bfd5-4c8e-a5b8-7ce9fe529f82","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"36c1d660-a6d7-4330-a41d-e55832a2db92","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"b4141816-2b29-4387-9af2-308580ecd967","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"43d3dd92-9c94-44cb-b95a-59091bdba957","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
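
Each line in the stdout block above is a self-contained CloudEvents-style JSON object whose "type" field distinguishes step, info, warning, and error events. Below is a minimal Go sketch (not part of the test suite) for pulling the error events out of such a stream; it uses only the field names visible above ("type", "data.name", "data.exitcode", "data.message"), and reading from stdin plus the 1 MiB buffer size are choices of this sketch, not part of minikube:

// A sketch that scans the line-delimited events shown above and prints
// only the io.k8s.sigs.minikube.error entries.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore anything that is not a JSON event
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}

Piping a start run into it (for example, out/minikube-linux-amd64 start --output=json ... | go run a hypothetical errwatch.go) would reduce the transcript above to the single RSRC_DOCKER_STORAGE line.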
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210813203833-13784 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210813203833-13784 --output=json --layout=cluster: exit status 7 (275.403298ms)

-- stdout --
	{"Name":"insufficient-storage-20210813203833-13784","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210813203833-13784","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0813 20:38:39.906387  152780 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210813203833-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210813203833-13784 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210813203833-13784 --output=json --layout=cluster: exit status 7 (271.932595ms)

-- stdout --
	{"Name":"insufficient-storage-20210813203833-13784","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210813203833-13784","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0813 20:38:40.178803  152838 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210813203833-13784" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	E0813 20:38:40.189832  152838 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/insufficient-storage-20210813203833-13784/events.json: no such file or directory

** /stderr **
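
The --layout=cluster document shown above nests per-node and per-component statuses under the cluster-level ones; here everything reports 507/InsufficientStorage, with the apiserver and kubelet components Stopped (405). A sketch of decoding it in Go, covering only the fields that appear in this report rather than the full schema:

// Decodes the --layout=cluster status payload shown above from stdin.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterState struct {
	Name         string               `json:"Name"`
	StatusCode   int                  `json:"StatusCode"`
	StatusName   string               `json:"StatusName"`
	StatusDetail string               `json:"StatusDetail"`
	Components   map[string]component `json:"Components"`
	Nodes        []node               `json:"Nodes"`
}

func main() {
	var st clusterState
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// 507 (InsufficientStorage) is exactly the state this test expects to see.
	fmt.Printf("%s: %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
}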
helpers_test.go:176: Cleaning up "insufficient-storage-20210813203833-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210813203833-13784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210813203833-13784: (6.213684385s)
--- PASS: TestInsufficientStorage (13.03s)

TestKubernetesUpgrade (109.42s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204027-13784 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204027-13784 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.211021914s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813204027-13784
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813204027-13784: (2.302202832s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210813204027-13784 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210813204027-13784 status --format={{.Host}}: exit status 7 (102.978211ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204027-13784 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204027-13784 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.72029919s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210813204027-13784 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204027-13784 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204027-13784 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=crio: exit status 106 (119.381947ms)

-- stdout --
	* [kubernetes-upgrade-20210813204027-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210813204027-13784
	    minikube start -p kubernetes-upgrade-20210813204027-13784 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813204027-137842 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813204027-13784 --kubernetes-version=v1.22.0-rc.0
	    

** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204027-13784 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813204027-13784 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.502620087s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210813204027-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813204027-13784

=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813204027-13784: (3.398310465s)
--- PASS: TestKubernetesUpgrade (109.42s)

TestMissingContainerUpgrade (176.83s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.082419994.exe start -p missing-upgrade-20210813203846-13784 --memory=2200 --driver=docker  --container-runtime=crio

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Done: /tmp/minikube-v1.9.1.082419994.exe start -p missing-upgrade-20210813203846-13784 --memory=2200 --driver=docker  --container-runtime=crio: (1m44.721606405s)
version_upgrade_test.go:320: (dbg) Run:  docker stop missing-upgrade-20210813203846-13784
version_upgrade_test.go:320: (dbg) Done: docker stop missing-upgrade-20210813203846-13784: (10.480769304s)
version_upgrade_test.go:325: (dbg) Run:  docker rm missing-upgrade-20210813203846-13784
version_upgrade_test.go:331: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210813203846-13784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:331: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210813203846-13784 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.7143288s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210813203846-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210813203846-13784

=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210813203846-13784: (2.762792407s)
--- PASS: TestMissingContainerUpgrade (176.83s)

TestPause/serial/Start (82.42s)

=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813203929-13784 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio

=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813203929-13784 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.415192198s)
--- PASS: TestPause/serial/Start (82.42s)

TestNetworkPlugins/group/false (0.74s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210813204010-13784 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210813204010-13784 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (276.697371ms)

-- stdout --
	* [false-20210813204010-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0813 20:40:10.788791  174572 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:40:10.788885  174572 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:10.788906  174572 out.go:311] Setting ErrFile to fd 2...
	I0813 20:40:10.788910  174572 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:40:10.789005  174572 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:40:10.789250  174572 out.go:305] Setting JSON to false
	I0813 20:40:10.833018  174572 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":4973,"bootTime":1628882237,"procs":262,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:40:10.833141  174572 start.go:121] virtualization: kvm guest
	I0813 20:40:10.835845  174572 out.go:177] * [false-20210813204010-13784] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:40:10.836002  174572 notify.go:169] Checking for updates...
	I0813 20:40:10.837383  174572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:40:10.838946  174572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:40:10.840485  174572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:40:10.841926  174572 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:40:10.842624  174572 config.go:177] Loaded profile config "cert-options-20210813203935-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:40:10.842773  174572 config.go:177] Loaded profile config "missing-upgrade-20210813203846-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0813 20:40:10.842906  174572 config.go:177] Loaded profile config "pause-20210813203929-13784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.21.3
	I0813 20:40:10.842962  174572 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:40:10.906054  174572 docker.go:132] docker version: linux-19.03.15
	I0813 20:40:10.906149  174572 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 20:40:11.000013  174572 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:185 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2021-08-13 20:40:10.949900137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 20:40:11.000112  174572 docker.go:244] overlay module found
	I0813 20:40:11.002562  174572 out.go:177] * Using the docker driver based on user configuration
	I0813 20:40:11.002598  174572 start.go:278] selected driver: docker
	I0813 20:40:11.002606  174572 start.go:751] validating driver "docker" against <nil>
	I0813 20:40:11.002632  174572 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 20:40:11.002711  174572 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 20:40:11.002733  174572 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 20:40:11.004423  174572 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 20:40:11.006648  174572 out.go:177] 
	W0813 20:40:11.006827  174572 out.go:242] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0813 20:40:11.008204  174572 out.go:177] 

** /stderr **
helpers_test.go:176: Cleaning up "false-20210813204010-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210813204010-13784
--- PASS: TestNetworkPlugins/group/false (0.74s)
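
This subtest passes precisely because the start command fails fast: with --cni=false the crio runtime has no way to wire up pod networking, so minikube exits with MK_USAGE (status 14) before creating anything, and any concrete CNI selection (for example --cni=bridge) would get past the check. A small sketch of asserting the same guard outside the test suite, assuming minikube is on PATH; the profile name cni-demo is hypothetical:

// Reproduces the check above: crio plus --cni=false must be rejected
// up front with exit status 14 (MK_USAGE).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "cni-demo",
		"--cni=false", "--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 14 {
		fmt.Println("rejected as expected (MK_USAGE): crio requires CNI")
		return
	}
	fmt.Printf("unexpected result: err=%v\noutput:\n%s", err, out)
}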

TestPause/serial/SecondStartNoReconfiguration (5.64s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813203929-13784 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813203929-13784 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.629561397s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.64s)

TestPause/serial/Unpause (0.9s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210813203929-13784 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.90s)

TestPause/serial/DeletePaused (3.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210813203929-13784 --alsologtostderr -v=5

=== CONT  TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210813203929-13784 --alsologtostderr -v=5: (3.793742362s)
--- PASS: TestPause/serial/DeletePaused (3.79s)

TestPause/serial/VerifyDeletedResources (0.8s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210813203929-13784
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210813203929-13784: exit status 1 (36.39479ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210813203929-13784

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.80s)

TestStartStop/group/old-k8s-version/serial/FirstStart (317.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813204214-13784 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0
E0813 20:42:16.206761   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813204214-13784 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (5m17.093308628s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (317.09s)

TestStartStop/group/no-preload/serial/FirstStart (152.49s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813204216-13784 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
E0813 20:42:39.181599   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813204216-13784 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (2m32.489065654s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (152.49s)

TestStartStop/group/embed-certs/serial/FirstStart (72.05s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813204258-13784 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813204258-13784 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (1m12.045069284s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.05s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (67.53s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813204407-13784 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813204407-13784 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (1m7.534199682s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (67.53s)

TestStartStop/group/embed-certs/serial/DeployApp (12.58s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813204258-13784 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [47e5e8dc-dcfa-4cbc-bc74-b8834ca4db35] Pending
helpers_test.go:343: "busybox" [47e5e8dc-dcfa-4cbc-bc74-b8834ca4db35] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [47e5e8dc-dcfa-4cbc-bc74-b8834ca4db35] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.01289779s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813204258-13784 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.58s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813204258-13784 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210813204258-13784 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/embed-certs/serial/Stop (23.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210813204258-13784 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210813204258-13784 --alsologtostderr -v=3: (23.07088379s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (23.07s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784: exit status 7 (109.777675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210813204258-13784 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (362.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813204258-13784 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813204258-13784 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (6m2.347093286s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813204258-13784 -n embed-certs-20210813204258-13784
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (362.67s)

TestStartStop/group/no-preload/serial/DeployApp (12.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813204216-13784 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [383269d8-485a-4bef-aff5-359cf0da4275] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [383269d8-485a-4bef-aff5-359cf0da4275] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.012325952s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813204216-13784 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813204216-13784 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210813204216-13784 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/no-preload/serial/Stop (20.57s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210813204216-13784 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210813204216-13784 --alsologtostderr -v=3: (20.568019368s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.57s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.51s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204407-13784 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [3d1a5748-12f3-45ad-82a2-5ae92e1dbfd5] Pending
helpers_test.go:343: "busybox" [3d1a5748-12f3-45ad-82a2-5ae92e1dbfd5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [3d1a5748-12f3-45ad-82a2-5ae92e1dbfd5] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 12.012412205s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204407-13784 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.51s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204216-13784 -n no-preload-20210813204216-13784
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204216-13784 -n no-preload-20210813204216-13784: exit status 7 (92.447054ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210813204216-13784 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (359.59s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813204216-13784 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813204216-13784 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (5m59.225116094s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813204216-13784 -n no-preload-20210813204216-13784
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (359.59s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813204407-13784 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204407-13784 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.84s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813204407-13784 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813204407-13784 --alsologtostderr -v=3: (20.837090201s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.84s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784: exit status 7 (112.962166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210813204407-13784 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (379.13s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813204407-13784 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3
E0813 20:46:16.135795   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
E0813 20:47:16.206907   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813204407-13784 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (6m18.804978931s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813204407-13784 -n default-k8s-different-port-20210813204407-13784
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (379.13s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813204214-13784 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ae848ac1-fc77-11eb-8f20-0242bfc25c59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [ae848ac1-fc77-11eb-8f20-0242bfc25c59] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.010507916s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813204214-13784 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210813204214-13784 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210813204214-13784 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (20.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210813204214-13784 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210813204214-13784 --alsologtostderr -v=3: (20.759174553s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.76s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204214-13784 -n old-k8s-version-20210813204214-13784
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204214-13784 -n old-k8s-version-20210813204214-13784: exit status 7 (94.887883ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210813204214-13784 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (60.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813204214-13784 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813204214-13784 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (1m0.584703743s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813204214-13784 -n old-k8s-version-20210813204214-13784
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (60.92s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-2jdmv" [d75463dd-fc77-11eb-b136-02429fe89262] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01140125s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-2jdmv" [d75463dd-fc77-11eb-b136-02429fe89262] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005678522s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210813204214-13784 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210813204214-13784 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
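
Note: the image audit above parses the JSON emitted by "sudo crictl images -o json" and flags anything minikube did not preload itself. A rough manual equivalent, assuming the CRI response shape (an images array whose entries carry repoTags) and jq on the host:
	out/minikube-linux-amd64 ssh -p old-k8s-version-20210813204214-13784 \
	  "sudo crictl images -o json" | jq -r '.images[].repoTags[]'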

TestStartStop/group/newest-cni/serial/FirstStart (58.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813204926-13784 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813204926-13784 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (58.136649918s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813204926-13784 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/newest-cni/serial/Stop (20.82s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210813204926-13784 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210813204926-13784 --alsologtostderr -v=3: (20.821820738s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.82s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813204926-13784 -n newest-cni-20210813204926-13784
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813204926-13784 -n newest-cni-20210813204926-13784: exit status 7 (101.702961ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210813204926-13784 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (26.60s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813204926-13784 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813204926-13784 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (26.201464071s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813204926-13784 -n newest-cni-20210813204926-13784
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.60s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-hrz2v" [5942b252-d775-410a-a974-03b117ab6f29] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010764s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-hrz2v" [5942b252-d775-410a-a974-03b117ab6f29] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006828769s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210813204258-13784 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210813204258-13784 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestNetworkPlugins/group/auto/Start (71.90s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210813204009-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210813204009-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio: (1m11.896601061s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.90s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210813204926-13784 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-n9kxj" [1fec0fc5-9088-4c74-88a3-597904c35c08] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-n9kxj" [1fec0fc5-9088-4c74-88a3-597904c35c08] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.01279825s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-n9kxj" [1fec0fc5-9088-4c74-88a3-597904c35c08] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006718094s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210813204216-13784 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.21s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210813204216-13784 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bdmvt" [7fd5752b-835b-4cc8-9860-861195aef3d6] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01130815s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bdmvt" [7fd5752b-835b-4cc8-9860-861195aef3d6] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005412978s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210813204407-13784 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210813204407-13784 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210813204009-13784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210813204009-13784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-vc8b8" [04fd00ce-2905-45fd-9e04-a9ab53b7209b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-vc8b8" [04fd00ce-2905-45fd-9e04-a9ab53b7209b] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006525554s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.27s)
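
Note: the NetCatPod flow above is deploy-then-poll: replace the netcat deployment, then watch for app=netcat pods to turn Running. The polling half can be approximated with kubectl wait, reusing the test's 15m budget:
	kubectl --context auto-20210813204009-13784 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-20210813204009-13784 wait --for=condition=Ready pod -l app=netcat --timeout=15m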

TestNetworkPlugins/group/custom-weave/Start (73.69s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210813204011-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210813204011-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio: (1m13.688593863s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (73.69s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210813204009-13784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210813204009-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210813204009-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
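
Note: Localhost and HairPin are the two halves of the in-pod netcat probe: the first dials 127.0.0.1 inside the pod, while the second dials the pod's own service name (netcat), which only succeeds when the network plugin handles hairpin traffic. The hairpin leg in isolation:
	kubectl --context auto-20210813204009-13784 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080 && echo hairpin-ok"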

TestNetworkPlugins/group/cilium/Start (100.83s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210813204011-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio
E0813 20:52:42.668132   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:52:52.908930   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210813204011-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio: (1m40.829711596s)
--- PASS: TestNetworkPlugins/group/cilium/Start (100.83s)

TestNetworkPlugins/group/calico/Start (106.54s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210813204011-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210813204011-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio: (1m46.53780621s)
--- PASS: TestNetworkPlugins/group/calico/Start (106.54s)

TestNetworkPlugins/group/enable-default-cni/Start (59.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210813204010-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210813204010-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio: (59.335939362s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.34s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210813204011-13784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-weave/NetCatPod (24.38s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210813204011-13784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context custom-weave-20210813204011-13784 replace --force -f testdata/netcat-deployment.yaml: (9.652303906s)
E0813 20:53:54.350464   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-nv7br" [d49bc9e9-d74d-48ae-8a46-e16bd895ff39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-nv7br" [d49bc9e9-d74d-48ae-8a46-e16bd895ff39] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 11.014284897s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (24.38s)

TestNetworkPlugins/group/kindnet/Start (80.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210813204010-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210813204010-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.87490847s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.88s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-26kbl" [ec4a8c19-51a8-4aad-bf47-028a50c6e7c4] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.018804528s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210813204011-13784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.34s)

TestNetworkPlugins/group/cilium/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210813204011-13784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-xd9g6" [38e96961-43f0-4751-bd1a-11083f1875a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-xd9g6" [38e96961-43f0-4751-bd1a-11083f1875a7] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.006534176s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.41s)

TestNetworkPlugins/group/cilium/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.24s)

TestNetworkPlugins/group/cilium/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210813204011-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.19s)

TestNetworkPlugins/group/cilium/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210813204011-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210813204010-13784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210813204010-13784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-lzcjv" [cbbfd473-6e20-4a3f-9169-3090dce36a43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-lzcjv" [cbbfd473-6e20-4a3f-9169-3090dce36a43] Running
E0813 20:54:49.711504   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:54:49.716782   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:54:49.727027   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:54:49.747289   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:54:49.787551   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:54:49.868294   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:54:50.029336   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:54:50.349480   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.007336389s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

TestNetworkPlugins/group/bridge/Start (52.20s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210813204010-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210813204010-13784 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio: (52.200020803s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813204010-13784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210813204010-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210813204010-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-ltzs5" [82d3d411-a6e1-49d2-8b46-358d98992fb4] Running
E0813 20:55:10.193097   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.013899109s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210813204011-13784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (14.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210813204011-13784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-ww4b8" [020c5de1-fb63-42bc-8fd7-944b3aa1b9c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 20:55:15.605134   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:15.610419   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:15.620669   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:15.640959   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:15.681219   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:15.761559   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:15.922314   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:16.243151   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:16.271257   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:55:16.884211   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:18.165226   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:19.258021   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-ww4b8" [020c5de1-fb63-42bc-8fd7-944b3aa1b9c0] Running
E0813 20:55:20.725449   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:55:25.845629   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.005532734s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.27s)

TestNetworkPlugins/group/calico/DNS (163.92s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (65.669871ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
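
Note: the attempts that follow re-run this same nslookup probe until it succeeds or the DNS test budget is exhausted; the 163.92s wall time for calico/DNS is almost entirely these retries. Hand-rolled, the loop might look like this (interval and attempt cap are assumptions, not the harness's actual schedule):
	for i in $(seq 1 30); do
	  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 10
	done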
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (66.50618ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (64.202597ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0813 20:55:30.673770   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (56.564521ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **

=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (63.80095ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **

=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (76.514594ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **

=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (62.749568ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **

=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (58.825548ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0813 20:56:11.634362   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:56:16.135588   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813201915-13784/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (59.242105ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0813 20:56:37.526745   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (56.819834ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (56.28533ms)

** stderr **
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0813 20:57:16.207402   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200856-13784/client.crt: no such file or directory
E0813 20:57:24.007935   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:24.013188   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:24.023406   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:24.043670   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:24.083878   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:24.164251   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:24.324701   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:24.645280   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:25.286225   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:26.566721   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:29.127400   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:32.425866   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:57:33.554941   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813204216-13784/client.crt: no such file or directory
E0813 20:57:34.247608   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:44.487767   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
E0813 20:57:59.447372   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory
E0813 20:58:00.111564   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813204214-13784/client.crt: no such file or directory
E0813 20:58:04.968418   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813204009-13784/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (163.92s)
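
Note: the E0813 cert_rotation.go lines interleaved above appear to be noise from client-go's certificate-rotation watcher, which is still tracking client certificates for other, parallel test profiles whose .minikube directories have already been cleaned up; they are unrelated to the calico/DNS result. The check itself simply reruns `kubectl exec deployment/netcat -- nslookup kubernetes.default` until the lookup succeeds, which it eventually does here (163.92s total). A minimal sketch of that polling pattern, assuming kubectl is on PATH and reusing the context name from the log; the timeout and backoff below are illustrative, not the harness's actual values:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // Equivalent to the logged command:
            //   kubectl --context <ctx> exec deployment/netcat -- nslookup kubernetes.default
            out, err := exec.Command("kubectl", "--context", "calico-20210813204011-13784",
                "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
            if err == nil {
                fmt.Printf("DNS resolution succeeded:\n%s", out)
                return
            }
            fmt.Printf("nslookup failed, retrying: %v\n%s", err, out)
            time.Sleep(15 * time.Second) // illustrative backoff between attempts
        }
        fmt.Println("timed out waiting for in-cluster DNS")
    }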

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-tn24k" [d0c3cf7c-94cf-431c-80dd-cf95e2030f43] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.011996827s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210813204010-13784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210813204010-13784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-86jhp" [c217e880-8171-4d10-82f5-37aa161ad536] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 20:55:36.085955   13784 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-10823-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813204407-13784/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-86jhp" [c217e880-8171-4d10-82f5-37aa161ad536] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.07737718s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.34s)
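
The NetCatPod subtests all follow the same deploy-then-wait shape: `kubectl replace --force -f testdata/netcat-deployment.yaml`, then poll until every pod matching app=netcat reports Running. A rough sketch of such a label-based wait, again assuming kubectl on PATH; podsRunning is a hypothetical helper, not the suite's helpers_test.go code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podsRunning reports whether at least one pod matches the selector and
    // every matching pod is in phase Running.
    func podsRunning(kubeContext, namespace, selector string) bool {
        out, err := exec.Command("kubectl", "--context", kubeContext,
            "get", "pods", "-n", namespace, "-l", selector,
            "-o", "jsonpath={.items[*].status.phase}").Output()
        if err != nil {
            return false
        }
        phases := strings.Fields(string(out))
        if len(phases) == 0 {
            return false
        }
        for _, phase := range phases {
            if phase != "Running" {
                return false
            }
        }
        return true
    }

    func main() {
        start := time.Now()
        for time.Since(start) < 15*time.Minute { // matches the 15m0s wait in the log
            if podsRunning("bridge-20210813204010-13784", "default", "app=netcat") {
                fmt.Printf("app=netcat healthy within %s\n", time.Since(start))
                return
            }
            time.Sleep(5 * time.Second) // illustrative poll interval
        }
        fmt.Println("timed out waiting for app=netcat")
    }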

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210813204010-13784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210813204010-13784 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-856q5" [6ae5df0a-f288-477d-a2bb-c9bda8b1ca74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-856q5" [6ae5df0a-f288-477d-a2bb-c9bda8b1ca74] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005944485s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210813204010-13784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210813204010-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210813204010-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
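
Localhost and HairPin probe two different paths from inside the same netcat pod: `nc -z localhost 8080` connects over the pod's own loopback, while `nc -z netcat 8080` dials the pod's service name, so the traffic leaves the pod and must hairpin back to it through the CNI and kube-proxy. A small sketch of both probes under the same assumptions as above; the probe helper is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probe runs nc inside the netcat deployment; -w 5 caps the connect wait
    // at five seconds and -z closes the connection without sending data.
    func probe(kubeContext, target string) error {
        return exec.Command("kubectl", "--context", kubeContext,
            "exec", "deployment/netcat", "--",
            "/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
    }

    func main() {
        for _, target := range []string{"localhost", "netcat"} {
            if err := probe("bridge-20210813204010-13784", target); err != nil {
                fmt.Printf("%s probe failed: %v\n", target, err)
                continue
            }
            fmt.Printf("%s probe succeeded\n", target)
        }
    }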

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210813204010-13784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210813204010-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210813204010-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210813204011-13784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    

Test skip (24/264)

TestDownloadOnly/v1.14.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)
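
Note: minikube preloads bundle the container images and Kubernetes binaries for a given Kubernetes version into a single tarball, which is presumably why the per-version cached-images and binaries subtests are skipped whenever a preload exists.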

                                                
                                    
TestDownloadOnly/v1.14.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with the docker container runtime; currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validates docker-env with the docker container runtime; currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validates podman-env with the docker container runtime; currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:39: Only runs with the none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env; currently testing the crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210813204407-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210813204407-13784
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)

                                                
                                    
TestNetworkPlugins/group/kubenet (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as the crio container runtime requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210813204009-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210813204009-13784
--- SKIP: TestNetworkPlugins/group/kubenet (0.49s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with the Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210813204010-13784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20210813204010-13784
--- SKIP: TestNetworkPlugins/group/flannel (0.48s)

                                                
                                    